Dataset columns:

| Column | Type | Values / length range |
| --- | --- | --- |
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 - 900k |
| metadata | stringlengths | 2 - 438k |
| id | stringlengths | 5 - 122 |
| last_modified | null | - |
| tags | listlengths | 1 - 1.84k |
| sha | null | - |
| created_at | stringlengths | 25 - 25 |
| arxiv | listlengths | 0 - 201 |
| languages | listlengths | 0 - 1.83k |
| tags_str | stringlengths | 17 - 9.34k |
| text_str | stringlengths | 0 - 389k |
| text_lists | listlengths | 0 - 722 |
| processed_texts | listlengths | 1 - 723 |
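Each record carries the columns listed above. A minimal sketch of inspecting one record follows, assuming the rows have been exported as a JSON-lines file; the filename is a hypothetical placeholder.

```python
# Minimal sketch: read one exported record and inspect a few of the columns
# listed in the schema table above. "model_cards.jsonl" is a hypothetical path.
import json

with open("model_cards.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

print(record["id"])            # model repository id, e.g. "SEBIS/code_trans_t5_base_..."
print(record["pipeline_tag"])  # e.g. "summarization"
print(record["text"][:200])    # first characters of the markdown model card
```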
summarization
transformers
# CodeTrans model for source code summarization python

Pretrained model on the python programming language using the t5-base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It was then fine-tuned on the source code summarization task for python code snippets.

## Intended uses & limitations

The model can be used to generate a description for a python function, or it can be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate python function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/python/base_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
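The card notes that the model works best on tokenized python functions but does not show how to produce that format. The sketch below is one possible approximation using Python's standard-library `tokenize` module; the helper name and the token filtering are assumptions, not the original CodeTrans preprocessing.

```python
# Hypothetical helper (not from the CodeTrans repo): turn python source into a
# whitespace-separated token string, roughly matching the card's example input.
import tokenize
from io import BytesIO

def whitespace_tokenize(source: str) -> str:
    skip = {tokenize.ENCODING, tokenize.NEWLINE, tokenize.NL,
            tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER}
    tokens = [tok.string
              for tok in tokenize.tokenize(BytesIO(source.encode("utf-8")).readline)
              if tok.type not in skip]
    return " ".join(tokens)

print(whitespace_tokenize("def add(a, b):\n    return a + b\n"))
# -> def add ( a , b ) : return a + b
```

The resulting string can then be passed to the pipeline exactly like `tokenized_code` in the snippet above.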
{"tags": ["summarization"], "widget": [{"text": "'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == \" ; Include this text \" : line = line + \" Include below \" out_file . write ( line ) '"}]}
SEBIS/code_trans_t5_base_source_code_summarization_python_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization python ==================================================== Pretrained model on programming language python using the t5 base model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-base' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization python

Pretrained model on the python programming language using the t5-base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It was then fine-tuned on the source code summarization task for python code snippets.

## Intended uses & limitations

The model can be used to generate a description for a python function, or it can be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate python function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/python/base_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
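The usage snippet above assumes a GPU (`device=0`). A sketch of the same pipeline construction for a CPU-only machine follows; the only change is `device=-1`, which selects the CPU in Transformers pipelines.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

model_id = "SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune"
pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(model_id),
    tokenizer=AutoTokenizer.from_pretrained(model_id, skip_special_tokens=True),
    device=-1,  # -1 = CPU; a non-negative integer selects the corresponding CUDA device
)
```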
{"tags": ["summarization"], "widget": [{"text": "'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == \" ; Include this text \" : line = line + \" Include below \" out_file . write ( line ) '"}]}
SEBIS/code_trans_t5_base_source_code_summarization_python_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization python ==================================================== Pretrained model on programming language python using the t5 base model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-base' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the python code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql

Pretrained model on the sql programming language using the t5-base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on the source code summarization sql dataset.

## Intended uses & limitations

The model can be used to generate a description for an sql function, or it can be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate sql function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql", skip_special_tokens=True),
    device=0
)

tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/sql/base_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
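`AutoModelWithLMHead` is deprecated in recent Transformers releases. A sketch of an equivalent call through `AutoModelForSeq2SeqLM` with an explicit `generate()` follows; the generation settings (`max_length`, `num_beams`) are illustrative assumptions rather than values from the CodeTrans paper.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "SEBIS/code_trans_t5_base_source_code_summarization_sql"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Same tokenized SQL query as in the card's example.
inputs = tokenizer("select time ( col0 ) from tab0", return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```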
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_base_source_code_summarization_sql
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 base model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-base' model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization sql dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql

Pretrained model on the sql programming language using the t5-base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model can be used to generate a description for an sql function, or it can be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate sql function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/base_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 base model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-base' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql

Pretrained model on the sql programming language using the t5-base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It was then fine-tuned on the source code summarization task for sql code snippets.

## Intended uses & limitations

The model can be used to generate a description for an sql function, or it can be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate sql function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/sql/base_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code.

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
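Since `SummarizationPipeline` accepts a list, several tokenized SQL queries can be summarized in one call. The sketch below reuses the `pipeline` object built in the How to use snippet above; the second query is an illustrative placeholder.

```python
queries = [
    "select time ( col0 ) from tab0",
    "select count ( col1 ) from tab0 where col2 = 'value'",  # illustrative placeholder
]
# `pipeline` is the SummarizationPipeline constructed in the snippet above;
# each result is a dict with a "summary_text" field.
for query, result in zip(queries, pipeline(queries)):
    print(query, "->", result["summary_text"])
```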
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_base_source_code_summarization_sql_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 base model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-base' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql

Pretrained model on the sql programming language using the t5-base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions.

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It was then fine-tuned on the source code summarization task for sql code snippets.

## Intended uses & limitations

The model can be used to generate a description for an sql function, or it can be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate sql function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_source_code_summarization_sql_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "select time ( col0 ) from tab0"
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/sql/base_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code.

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results:

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_base_source_code_summarization_sql_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 base model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-base' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the sql code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
feature-extraction
transformers
# CodeTrans transfer learning pre-trained model

Pretrained model on programming languages using the t5-base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain.

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

It can be fine-tuned on other tasks in the software development domain.

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
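Unlike the fine-tuned checkpoints above, this card ships no usage snippet. A minimal sketch of loading the checkpoint as a starting point for further fine-tuning follows; it only loads the weights and reports the parameter count, and the downstream training loop is out of scope here.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "SEBIS/code_trans_t5_base_transfer_learning_pretrain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Sanity check: the card reports roughly 220M parameters.
print(f"{model.num_parameters() / 1e6:.0f}M parameters")
```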
{}
SEBIS/code_trans_t5_base_transfer_learning_pretrain
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #endpoints_compatible #text-generation-inference #region-us
# CodeTrans transfer learning pre-trained model Pretrained model on programming languages using the t5 base model architecture. It was first released in this repository. ## Model description This CodeTrans model is based on the 't5-base' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. It could be used to fine-tune other tasks in the software development domain. > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn
[ "# CodeTrans transfer learning pre-trained model\nPretrained model on programming languages using the t5 base model architecture. It was first released in\nthis repository.", "## Model description\n\nThis CodeTrans model is based on the 't5-base' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. \n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. \n\nIt could be used to fine-tune other tasks in the software development domain.\n\n\n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n", "# CodeTrans transfer learning pre-trained model\nPretrained model on programming languages using the t5 base model architecture. It was first released in\nthis repository.", "## Model description\n\nThis CodeTrans model is based on the 't5-base' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. \n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. \n\nIt could be used to fine-tune other tasks in the software development domain.\n\n\n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn" ]
summarization
transformers
# CodeTrans model for api recommendation generation

Pretrained model for api recommendation generation using the t5-large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model can be used to generate api usage recommendations for java programming tasks.

### How to use

Here is how to use this model to generate api usage recommendations using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/api%20generation/large_model.ipynb).

## Training data

The supervised training task datasets can be downloaded from [this link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.

## Evaluation results

For the api recommendation generation task, different models achieve the following results (in BLEU score):

Test results:

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "parse the uses licence node of this package , if any , and returns the license definition if theres"}]}
SEBIS/code_trans_t5_large_api_generation_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for api recommendation generation ================================================= Pretrained model for api recommendation generation using the t5 large model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java api usage using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the api recommendation generation task, different models achieve the following results (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
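The usage description above refers to a SummarizationPipeline call that this flattened rendering does not show. A minimal sketch, mirroring the snippet from the full model card for this checkpoint; the CPU fallback via `torch.cuda.is_available()` is an assumption added here, not part of the original card:

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

model_name = "SEBIS/code_trans_t5_large_api_generation_multitask"
pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(model_name),
    tokenizer=AutoTokenizer.from_pretrained(model_name, skip_special_tokens=True),
    device=0 if torch.cuda.is_available() else -1,  # fall back to CPU when no GPU is present
)

# Input is a natural-language intent; the model suggests related Java API usage.
intent = "parse the uses licence node of this package , if any , and returns the license definition if theres"
print(pipeline([intent]))
```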
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the api recommendation generation task for the java apis.

## Intended uses & limitations

The model could be used to generate api usage for the java programming tasks.

### How to use

Here is how to use this model to generate java api usage using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_api_generation_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/api%20generation/large_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 130,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing api recommendation generation data.

## Evaluation results

For the api recommendation generation task, different models achieve the following results (in BLEU score):

Test results :

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "parse the uses licence node of this package , if any , and returns the license definition if theres"}]}
SEBIS/code_trans_t5_large_api_generation_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for api recommendation generation ================================================= Pretrained model for api recommendation generation using the t5 large model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the api recommendation generation task for the java apis. Intended uses & limitations --------------------------- The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java api usage using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 130,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing api recommendation generation data. Evaluation results ------------------ For the api recommendation generation task, different models achieve the following results (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
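A back-of-the-envelope check of the fine-tuning scale described above (this assumes the stated batch size counts sequences per step, which the card does not state explicitly):

```python
# 130,000 fine-tuning steps at batch size 256
steps, batch_size = 130_000, 256
print(f"{steps * batch_size:,} sequences processed during fine-tuning")  # 33,280,000
```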
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 130,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 130,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the api recommendation generation task for the java apis.

## Intended uses & limitations

The model could be used to generate api usage for the java programming tasks.

### How to use

Here is how to use this model to generate java api usage using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_api_generation_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_api_generation_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/api%20generation/large_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing api recommendation generation data.

## Evaluation results

For the api recommendation generation task, different models achieve the following results (in BLEU score):

Test results :

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "parse the uses licence node of this package , if any , and returns the license definition if theres"}]}
SEBIS/code_trans_t5_large_api_generation_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for api recommendation generation ================================================= Pretrained model for api recommendation generation using the t5 large model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the api recommendation generation task for the java apis. Intended uses & limitations --------------------------- The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java api usage using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing api recommendation generation data. Evaluation results ------------------ For the api recommendation generation task, different models achieve the following results (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_comment_generation_java_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_comment_generation_java_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/code%20comment%20generation/large_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"}]}
SEBIS/code_trans_t5_large_code_comment_generation_java_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code comment generation java ================================================ Pretrained model on programming language java using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
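The card above notes that tokenized Java input works best but does not show how to tokenize. One possible way to pre-tokenize raw Java before calling the pipeline is sketched below using the third-party javalang package; this is an assumption for illustration, the authors' exact tokenization script is not given here:

```python
import javalang

def space_tokenize_java(code: str) -> str:
    # Split Java source into lexical tokens and re-join with single spaces,
    # matching the space-separated style of the card's example inputs.
    return " ".join(tok.value for tok in javalang.tokenizer.tokenize(code))

raw = "protected String renderUri(URI uri) { return uri.toASCIIString(); }"
print(space_tokenize_java(raw))
# protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }
```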
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code comment generation task for the java function/method.

## Intended uses & limitations

The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_comment_generation_java_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_comment_generation_java_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/code%20comment%20generation/large_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V3-8 for 25,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.

## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"}]}
SEBIS/code_trans_t5_large_code_comment_generation_java_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code comment generation java ================================================ Pretrained model on programming language java using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code comment generation task for the java function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V3-8 for 25,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V3-8 for 25,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V3-8 for 25,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/go/large_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.
## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation go ==================================================== Pretrained model on programming language go using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized go code functions: it works best with tokenized go functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
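The pre-training paragraph above mentions an inverse square root learning rate schedule with AdaFactor but gives no constants. A sketch of the usual rsqrt schedule shape follows; the peak value and warmup length are illustrative assumptions, not documented CodeTrans settings:

```python
def inverse_sqrt_lr(step: int, peak_lr: float = 0.01, warmup_steps: int = 10_000) -> float:
    # Constant at peak_lr during warmup, then decaying proportionally to 1/sqrt(step).
    return peak_lr * (warmup_steps ** 0.5) / (max(step, warmup_steps) ** 0.5)

for s in (1_000, 10_000, 40_000, 160_000):
    print(s, round(inverse_sqrt_lr(s), 5))  # 0.01, 0.01, 0.005, 0.0025
```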
[ "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the go function/method.

## Intended uses & limitations

The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/go/large_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 4500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing go code.
## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_go_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation go ==================================================== Pretrained model on programming language go using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized go code functions: it works best with tokenized go functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the go function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing go code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
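The evaluation paragraph above reports BLEU, but the flattened text drops the scoring details. A hedged sketch of how generated documentation could be scored with sacrebleu; the hypothesis and reference strings are made up for illustration, and the exact BLEU variant used by the authors is not specified here:

```python
import sacrebleu

# Hypothetical model output and reference docstring, for illustration only.
hypotheses = ["returns true if a pending snapshot should be aborted"]
references = [["needSnapshotAbort returns true if the progress has caught up with the pending snapshot"]]

print(sacrebleu.corpus_bleu(hypotheses, references).score)
```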
[ "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 4500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 4500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the go function/method.

## Intended uses & limitations

The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_go_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/go/large_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing go code.
## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_go_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation go ==================================================== Pretrained model on programming language go using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized go code functions: it works best with tokenized go functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the go function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/java/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
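The pipeline also accepts a list of inputs, so several tokenized java functions can be documented in a single call. This is a small sketch assuming the `pipeline` object from the example above; the second function string is a made-up illustration, not taken from the training data.

```python
tokenized_functions = [
    "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }",
    # Hypothetical second example, included only to show batching.
    "public int add ( int a , int b ) { return a + b ; }",
]
for function, result in zip(tokenized_functions, pipeline(tokenized_functions)):
    # Each result is a dict with a "summary_text" key.
    print(function[:40], "->", result["summary_text"])
```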
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation java ====================================================== Pretrained model on programming language java using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 180,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the java function/method. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/java/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.
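As noted above, tokenized input tends to work better than raw source. The exact tokenization used to build the CodeTrans training data is not described in this card, so the helper below is only a rough, assumed approximation that separates common java punctuation with spaces before calling the pipeline; it is not the original preprocessing.

```python
import re

def rough_tokenize(code: str) -> str:
    # Put spaces around common punctuation so raw java loosely matches the
    # space-separated style of the example above (an approximation only).
    spaced = re.sub(r"([(){}\[\]<>,;=.*&|+-])", r" \1 ", code)
    return " ".join(spaced.split())

raw_java = "public int add(int a, int b) { return a + b; }"  # hypothetical input
print(rough_tokenize(raw_java))
# The result can then be passed to the pipeline: pipeline([rough_tokenize(raw_java)])
```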
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_java_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation java ====================================================== Pretrained model on programming language java using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the java function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the java function/method. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_java_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/java/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.
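Generation behaviour can be adjusted through the pipeline call itself, since `SummarizationPipeline` forwards generation keyword arguments to the underlying model. The values below are illustrative assumptions rather than settings from the CodeTrans experiments, and `pipeline`/`tokenized_code` are assumed to come from the example above.

```python
# Shorter, beam-searched documentation strings (illustrative settings only).
result = pipeline([tokenized_code], max_length=48, min_length=5, num_beams=4)
print(result[0]["summary_text"])
```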
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_java_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation java ====================================================== Pretrained model on programming language java using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the java function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation javascript Pretrained model on programming language javascript using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/javascript/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
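The example above pins the pipeline to the first GPU with `device=0`. On a machine without a GPU, the same checkpoint can run on CPU by passing `device=-1` (or omitting the argument). A minimal sketch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

model_name = "SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask"
cpu_pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(model_name),
    tokenizer=AutoTokenizer.from_pretrained(model_name, skip_special_tokens=True),
    device=-1,  # -1 selects CPU in the Transformers pipeline API
)

# Tokenized javascript function from the example above.
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
print(cpu_pipeline([tokenized_code])[0]["summary_text"])
```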
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation javascript ============================================================ Pretrained model on programming language javascript using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation javascript Pretrained model on programming language javascript using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the javascript function/method. ## Intended uses & limitations The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/javascript/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing javascript code.
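Functions longer than the 512-token training length need to be truncated (or split) before generation. The sketch below loads the checkpoint directly and truncates the encoder input; it is an illustrative approach, not a procedure from the CodeTrans repository, and the `max_length` for the generated summary is an assumption.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

model_name = "SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithLMHead.from_pretrained(model_name)

# Tokenized javascript function from the example above.
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"

# Truncate the encoder input to the 512-token limit used during training.
inputs = tokenizer(tokenized_code, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```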
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_javascript_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation javascript ============================================================ Pretrained model on programming language javascript using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the javascript function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2,500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2,500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation javascript Pretrained model on programming language javascript using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the javascript function/method. ## Intended uses & limitations The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_javascript_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/javascript/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V3-8 for 4,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing javascript code.
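The card states that tokenized input tends to score better than raw source. A quick way to check this on your own data is to run both forms through the pipeline and compare the outputs. The sketch assumes the `pipeline` object from the example above; the raw string is simply the same function with its ordinary spacing restored.

```python
tokenized = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
raw = "function isStandardBrowserEnv() { if (typeof navigator !== 'undefined' && (navigator.product === 'ReactNative' || navigator.product === 'NativeScript' || navigator.product === 'NS')) { return false; } return (typeof window !== 'undefined' && typeof document !== 'undefined'); }"

for label, snippet in [("tokenized", tokenized), ("raw", raw)]:
    # Both forms work; the tokenized one matches the training distribution more closely.
    print(label, "->", pipeline([snippet])[0]["summary_text"])
```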
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_javascript_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation javascript ============================================================ Pretrained model on programming language javascript using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the javascript function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V3-8 for 4,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V3-8 for 4,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V3-8 for 4,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/php/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #has_space #text-generation-inference #region-us
CodeTrans model for code documentation generation php ===================================================== Pretrained model on programming language php using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized php code functions: it works best with tokenized php functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #has_space #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the php function/method. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/php/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 8,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing php code.
Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_php_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation php ===================================================== Pretrained model on programming language php using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized php code functions: it works best with tokenized php functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the php function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 8,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing php code. Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 8000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 8000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/php/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 18,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing php code.
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_php_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation php ===================================================== Pretrained model on programming language php using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized php code functions: it works best with tokenized php functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 18,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing php code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 18,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 18,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation python Pretrained model on programming language python using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/python/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation python ======================================================== Pretrained model on programming language python using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation python Pretrained model on programming language python using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the python function/method. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/python/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.
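The training procedure above mentions AdaFactor with an inverse square root learning rate schedule without showing it. The sketch below is a rough PyTorch/Transformers approximation of that setup, not the original TPU training code; the peak learning rate and warmup length are illustrative assumptions.

```python
# Rough sketch (assumptions noted inline) of AdaFactor with an inverse square root
# learning rate schedule, approximated with the Transformers Adafactor implementation.
import torch
from transformers import Adafactor, AutoModelWithLMHead

model = AutoModelWithLMHead.from_pretrained(
    "SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask_finetune"
)

optimizer = Adafactor(
    model.parameters(),
    lr=1e-2,                  # assumed peak learning rate (not reported in the card)
    relative_step=False,      # manage the learning rate externally
    scale_parameter=False,
    warmup_init=False,
)

warmup_steps = 10_000         # assumed warmup length (not reported in the card)

def inverse_sqrt(step: int) -> float:
    # Constant factor during warmup, then decay proportional to 1/sqrt(step).
    step = max(step, 1)
    return min(1.0, (warmup_steps / step) ** 0.5)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=inverse_sqrt)

# Inside a training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```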
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_python_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation python ======================================================== Pretrained model on programming language python using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the python function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation python Pretrained model on programming language python using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the python function/method. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/python/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.
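The cards above stress that inputs should be tokenized python functions but do not show the tokenization step. The snippet below is one possible preprocessing (an assumption, not necessarily the original CodeTrans pipeline) that space-separates the lexical tokens of a raw python function with the standard library `tokenize` module so the input matches the tokenized examples shown here.

```python
# Possible preprocessing sketch: whitespace-tokenize a raw python function before
# passing it to a SummarizationPipeline like the one defined in the example above.
import io
import tokenize

def tokenize_python(source: str) -> str:
    skip = {tokenize.NEWLINE, tokenize.NL, tokenize.INDENT,
            tokenize.DEDENT, tokenize.ENDMARKER, tokenize.COMMENT}
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type not in skip:
            tokens.append(tok.string)
    return " ".join(tokens)

raw_code = (
    "def e(message, exit_code=None):\n"
    "    print_log(message, YELLOW, BOLD)\n"
    "    if exit_code is not None:\n"
    "        sys.exit(exit_code)\n"
)
tokenized_code = tokenize_python(raw_code)
# tokenized_code now roughly matches the space-separated form used in this card's example:
# "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"
```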
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_python_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation python ======================================================== Pretrained model on programming language python using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the python function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/ruby/base_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. (We trained for 260,000 steps in total.)
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation ruby ====================================================== Pretrained model on programming language ruby using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model can be used to generate a description for a ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training task datasets can be downloaded from the Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. (We trained for 260,000 steps in total.) Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
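The usage described above is shown only in the full card; for reference, a minimal sketch of the same SummarizationPipeline call follows. The checkpoint name is the one from this card, the input is the card's widget example, and `device=0` assumes a GPU is available (omit it to run on CPU).

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Load the multi-task pre-trained checkpoint named in this card.
pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(
        "SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask"
    ),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask",
        skip_special_tokens=True,
    ),
    device=0,  # assumes a GPU; drop this argument to run on CPU
)

# Tokenized ruby function taken from the card's widget example.
tokenized_code = (
    "def add ( severity , progname , & block ) return true if io . nil? || "
    "severity < level message = format_message ( severity , progname , yield ) "
    "MUTEX . synchronize { io . write ( message ) } true end"
)
print(pipeline([tokenized_code]))
```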
[ "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. (We have trained in total 260,000 steps.)\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. (We have trained in total 260,000 steps.)\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the ruby function/method. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/ruby/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation ruby ====================================================== Pretrained model on programming language ruby using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the ruby function/method. Intended uses & limitations --------------------------- The model can be used to generate a description for a ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training task datasets can be downloaded from the Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
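As a variation on the pipeline call documented in the card, the same fine-tuned checkpoint can also be driven through `generate` directly. The sketch below assumes the checkpoint name from this card; the generation parameters (beam size, output length) are illustrative assumptions, not values from the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SEBIS/code_trans_t5_large_code_documentation_generation_ruby_multitask_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

# Same tokenized ruby function as in the card's widget example.
tokenized_code = (
    "def add ( severity , progname , & block ) return true if io . nil? || "
    "severity < level message = format_message ( severity , progname , yield ) "
    "MUTEX . synchronize { io . write ( message ) } true end"
)

inputs = tokenizer(tokenized_code, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)  # illustrative settings
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```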
[ "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the ruby function/method. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_code_documentation_generation_ruby_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/ruby/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"}]}
SEBIS/code_trans_t5_large_code_documentation_generation_ruby_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation ruby ====================================================== Pretrained model on programming language ruby using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the ruby function/method. Intended uses & limitations --------------------------- The model can be used to generate a description for a ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training task datasets can be downloaded from the Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
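The pre-training procedure above names AdaFactor with an inverse square root learning-rate schedule. A rough sketch of that optimizer setup using the Adafactor implementation shipped with `transformers` is shown below; the stand-in module and flags are assumptions for illustration, not the authors' TPU configuration.

```python
import torch
from transformers.optimization import Adafactor, AdafactorSchedule

# Stand-in module for the T5 encoder-decoder, just to make the sketch runnable.
model = torch.nn.Linear(8, 8)

# With relative_step=True and no fixed lr, Adafactor uses a time-dependent
# learning rate that decays roughly with the inverse square root of the step
# count, which matches the schedule family described in the card.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    relative_step=True,
    scale_parameter=True,
    warmup_init=True,
)
lr_schedule = AdafactorSchedule(optimizer)  # proxy schedule, mainly useful for logging
```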
[ "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for git commit message generation Pretrained model on git commit using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commit: it works best with tokenized git commit. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the git commit message for the git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate git commit message using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/commit%20generation/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 39.61 | | CodeTrans-ST-Base | 38.67 | | CodeTrans-TF-Small | 44.22 | | CodeTrans-TF-Base | 44.17 | | CodeTrans-TF-Large | **44.41** | | CodeTrans-MT-Small | 36.17 | | CodeTrans-MT-Base | 39.25 | | CodeTrans-MT-Large | 41.18 | | CodeTrans-MT-TF-Small | 43.96 | | CodeTrans-MT-TF-Base | 44.19 | | CodeTrans-MT-TF-Large | 44.34 | | State of the art | 32.81 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"}]}
SEBIS/code_trans_t5_large_commit_generation_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for git commit message generation ================================================= Pretrained model on git commits using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized git commits: it works best with tokenized git commits. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model can be used to generate a git commit message for a set of git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the changes are tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training task datasets can be downloaded from the Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the git commit message generation task, different models achieve the following results (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
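For reference, the commit-message pipeline call described in the card looks like the following sketch; the checkpoint and example diff are taken from the card, and `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/code_trans_t5_large_commit_generation_multitask", skip_special_tokens=True
    ),
    device=0,  # assumes a GPU is available
)

# Tokenized diff taken from the card's widget example.
tokenized_diff = (
    "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null "
    "and b / src / plugins / gateway / lib / joscar . jar differ"
)
print(pipeline([tokenized_diff]))
```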
[ "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for git commit message generation Pretrained model on git commit using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commit: it works best with tokenized git commit. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the git commit message generation task for the java commit changes. ## Intended uses & limitations The model could be used to generate the git commit message for the git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate git commit message using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/commit%20generation/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 3,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes. ## Evaluation results For the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 39.61 | | CodeTrans-ST-Base | 38.67 | | CodeTrans-TF-Small | 44.22 | | CodeTrans-TF-Base | 44.17 | | CodeTrans-TF-Large | **44.41** | | CodeTrans-MT-Small | 36.17 | | CodeTrans-MT-Base | 39.25 | | CodeTrans-MT-Large | 41.18 | | CodeTrans-MT-TF-Small | 43.96 | | CodeTrans-MT-TF-Base | 44.19 | | CodeTrans-MT-TF-Large | 44.34 | | State of the art | 32.81 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"}]}
SEBIS/code_trans_t5_large_commit_generation_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for git commit message generation ================================================= Pretrained model on git commits using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized git commits: it works best with tokenized git commits. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the git commit message generation task for the Java commit changes. Intended uses & limitations --------------------------- The model can be used to generate a git commit message for a set of git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the changes are tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training task datasets can be downloaded from the Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 3,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes. Evaluation results ------------------ For the git commit message generation task, different models achieve the following results (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
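The table in this card reports BLEU scores for generated commit messages. The exact scoring setup is not given here; a common way to compute corpus BLEU over model outputs is sacreBLEU, sketched below with made-up prediction and reference strings.

```python
import sacrebleu

# Hypothetical model outputs and gold commit messages, one reference per sample.
predictions = ["add joscar jar to gateway plugin"]
references = ["adds missing joscar . jar for the gateway plugin"]

# corpus_bleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(predictions, [references])
print(bleu.score)
```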
[ "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 3,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 3,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for git commit message generation Pretrained model on git commit using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commit: it works best with tokenized git commit. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the git commit message generation task for the java commit changes. ## Intended uses & limitations The model could be used to generate the git commit message for the git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate git commit message using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/commit%20generation/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes. ## Evaluation results For the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 39.61 | | CodeTrans-ST-Base | 38.67 | | CodeTrans-TF-Small | 44.22 | | CodeTrans-TF-Base | 44.17 | | CodeTrans-TF-Large | **44.41** | | CodeTrans-MT-Small | 36.17 | | CodeTrans-MT-Base | 39.25 | | CodeTrans-MT-Large | 41.18 | | CodeTrans-MT-TF-Small | 43.96 | | CodeTrans-MT-TF-Base | 44.19 | | CodeTrans-MT-TF-Large | 44.34 | | State of the art | 32.81 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"}]}
SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for git commit message generation ================================================= Pretrained model on git commits using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized git commits: it works best with tokenized git commits. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the git commit message generation task for the Java commit changes. Intended uses & limitations --------------------------- The model can be used to generate a git commit message for a set of git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the changes are tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training task datasets can be downloaded from the Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes. Evaluation results ------------------ For the git commit message generation task, different models achieve the following results (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
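The fine-tuning step above was run on a TPU Pod V2-8 with batch size 256. A rough single-GPU approximation of that supervised fine-tuning on (diff, commit message) pairs might look like the sketch below; the checkpoint choice, optimizer settings, and toy data are assumptions for illustration, not the authors' setup.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor

# In practice one would start from the transfer-learning pre-trained checkpoint;
# its name is not given in this card, so the fine-tuned card name is used here.
base = "SEBIS/code_trans_t5_large_commit_generation_transfer_learning_finetune"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base).train()
optimizer = Adafactor(model.parameters(), lr=1e-4, relative_step=False, scale_parameter=False)

# Hypothetical (tokenized diff, commit message) pairs standing in for the commit-change dataset.
commit_pairs = [
    ("new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null "
     "and b / src / plugins / gateway / lib / joscar . jar differ",
     "add joscar library to gateway plugin"),
]

def collate(batch):
    diffs, msgs = zip(*batch)
    enc = tokenizer(list(diffs), padding=True, truncation=True, max_length=512, return_tensors="pt")
    labels = tokenizer(list(msgs), padding=True, truncation=True, max_length=512, return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding positions in the loss
    enc["labels"] = labels
    return enc

for batch in DataLoader(commit_pairs, batch_size=1, shuffle=True, collate_fn=collate):
    loss = model(**batch).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```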
[ "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 4,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 4,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for program synthesis Pretrained model on programming language lisp inspired DSL using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate lisp inspired DSL code given the human language description tasks. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/program%20synthesis/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | LISP | | -------------------- | :------------: | | CodeTrans-ST-Small | 89.43 | | CodeTrans-ST-Base | 89.65 | | CodeTrans-TF-Small | 90.30 | | CodeTrans-TF-Base | 90.24 | | CodeTrans-TF-Large | 90.21 | | CodeTrans-MT-Small | 82.88 | | CodeTrans-MT-Base | 86.99 | | CodeTrans-MT-Large | 90.27 | | CodeTrans-MT-TF-Small | **90.31** | | CodeTrans-MT-TF-Base | 90.30 | | CodeTrans-MT-TF-Large | 90.17 | | State of the art | 85.80 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "you are given an array of numbers a and a number b , compute the difference of elements in a and b"}]}
SEBIS/code_trans_t5_large_program_synthese_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for program synthesis ===================================== Pretrained model on a Lisp-inspired DSL programming language using the t5 large model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model can be used to generate Lisp-inspired DSL code from a natural-language task description. ### How to use Here is how to use this model to generate Lisp-inspired DSL code using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training task datasets can be downloaded from the Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the program synthesis task, different models achieve the following results (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
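For reference, the program-synthesis pipeline call described in the card looks like this sketch; the checkpoint and description come from the card's widget, and `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask"),
    tokenizer=AutoTokenizer.from_pretrained(
        "SEBIS/code_trans_t5_large_program_synthese_multitask", skip_special_tokens=True
    ),
    device=0,  # assumes a GPU; omit to run on CPU
)

# Natural-language task description from the card's widget example.
description = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
print(pipeline([description]))
```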
[ "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 220,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for program synthesis Pretrained model on programming language lisp inspired DSL using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code. ## Intended uses & limitations The model could be used to generate lisp inspired DSL code given the human language description tasks. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/program%20synthesis/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | LISP | | -------------------- | :------------: | | CodeTrans-ST-Small | 89.43 | | CodeTrans-ST-Base | 89.65 | | CodeTrans-TF-Small | 90.30 | | CodeTrans-TF-Base | 90.24 | | CodeTrans-TF-Large | 90.21 | | CodeTrans-MT-Small | 82.88 | | CodeTrans-MT-Base | 86.99 | | CodeTrans-MT-Large | 90.27 | | CodeTrans-MT-TF-Small | **90.31** | | CodeTrans-MT-TF-Base | 90.30 | | CodeTrans-MT-TF-Large | 90.17 | | State of the art | 85.80 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "you are given an array of numbers a and a number b , compute the difference of elements in a and b"}]}
SEBIS/code_trans_t5_large_program_synthese_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for program synthesis ===================================== Pretrained model on programming language lisp inspired DSL using the t5 large model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code. Intended uses & limitations --------------------------- The model could be used to generate lisp inspired DSL code given the human language description tasks. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for program synthesis
Pretrained model on programming language lisp inspired DSL using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).


## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code.

## Intended uses & limitations

The model could be used to generate lisp inspired DSL code given a human language description.

### How to use

Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_program_synthese_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/transfer%20learning%20fine-tuning/large_model.ipynb).
## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)


## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 3,500 steps in total, using sequence length 512 (batch size 256), using only the dataset containing lisp inspired DSL data.


## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | LISP |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 89.43 |
| CodeTrans-ST-Base | 89.65 |
| CodeTrans-TF-Small | 90.30 |
| CodeTrans-TF-Base | 90.24 |
| CodeTrans-TF-Large | 90.21 |
| CodeTrans-MT-Small | 82.88 |
| CodeTrans-MT-Base | 86.99 |
| CodeTrans-MT-Large | 90.27 |
| CodeTrans-MT-TF-Small | **90.31** |
| CodeTrans-MT-TF-Base | 90.30 |
| CodeTrans-MT-TF-Large | 90.17 |
| State of the art | 85.80 |


> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "you are given an array of numbers a and a number b , compute the difference of elements in a and b"}]}
SEBIS/code_trans_t5_large_program_synthese_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for program synthesis ===================================== Pretrained model on programming language lisp inspired DSL using the t5 large model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code. Intended uses & limitations --------------------------- The model could be used to generate lisp inspired DSL code given the human language description tasks. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 3,500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 3,500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 3,500 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/csharp/large_model.ipynb).
## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)


## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.


## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |


> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"}]}
SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #has_space #text-generation-inference #region-us
CodeTrans model for source code summarization csharp ==================================================== Pretrained model on programming language csharp using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #has_space #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the csharp code snippets.

## Intended uses & limitations

The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/csharp/large_model.ipynb).
## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)


## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.
## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |


> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"}]}
SEBIS/code_trans_t5_large_source_code_summarization_csharp_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization csharp ==================================================== Pretrained model on programming language csharp using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the csharp code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the csharp code snippets.

## Intended uses & limitations

The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_csharp_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_csharp_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/csharp/large_model.ipynb).
## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)


## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 200 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.
## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |


> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"}]}
SEBIS/code_trans_t5_large_source_code_summarization_csharp_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization csharp ==================================================== Pretrained model on programming language csharp using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the csharp code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization Python
Pretrained model on programming language python using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model could be used to generate the description for the Python function or be fine-tuned on other Python code tasks. It can be used on unparsed and untokenized Python code. However, if the Python code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate Python function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file :
    buf = in_file . readlines ( )

with open ( CODE_STRING , CODE_STRING ) as out_file :
    for line in buf :
        if line == " ; Include this text " :
            line = line + " Include below "
        out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/python/large_model.ipynb).
## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)


## Training procedure

### Multi-task Training

The model was trained on a single TPU Pod V3-8 for 80,000 steps, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. (We have trained in total 260,000 steps.)


## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| State of the art | -- | 18.40 | 20.50 |


> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == \" ; Include this text \" : line = line + \" Include below \" out_file . write ( line ) '"}]}
SEBIS/code_trans_t5_large_source_code_summarization_python_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization Python ==================================================== Pretrained model on programming language python using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the Python function or be fine-tuned on other Python code tasks. It can be used on unparsed and untokenized Python code. However, if the Python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate Python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Training The model was trained on a single TPU Pod V3-8 for 80,000 steps, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. (We have trained in total 260,000 steps.) Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate Python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Training\n\n\nThe model was trained on a single TPU Pod V3-8 for 80,000 steps, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. (We have trained in total 260,000 steps.)\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate Python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Training\n\n\nThe model was trained on a single TPU Pod V3-8 for 80,000 steps, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. (We have trained in total 260,000 steps.)\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 large model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.

## Model description

This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets.

## Intended uses & limitations

The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file :
    buf = in_file . readlines ( )

with open ( CODE_STRING , CODE_STRING ) as out_file :
    for line in buf :
        if line == " ; Include this text " :
            line = line + " Include below "
        out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/python/large_model.ipynb).
## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)


## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code.
## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | SQL | C# |
| -------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 |
| CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 |
| CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 |
| CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 |
| CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 |
| CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 |
| CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 |
| CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** |
| CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 |
| CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 |
| CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 |
| CODE-NN | -- | 18.40 | 20.50 |


> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == \" ; Include this text \" : line = line + \" Include below \" out_file . write ( line ) '"}]}
SEBIS/code_trans_t5_large_source_code_summarization_python_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization python ==================================================== Pretrained model on programming language python using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization python Pretrained model on programming language python using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the python code snippets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_python_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/python/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code. 
## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == \" ; Include this text \" : line = line + \" Include below \" out_file . write ( line ) '"}]}
SEBIS/code_trans_t5_large_source_code_summarization_python_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization python ==================================================== Pretrained model on programming language python using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the python code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
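For reference, the SummarizationPipeline call mentioned above (given in full in the markdown version of this card) is sketched here with this row's checkpoint id; `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

checkpoint = "SEBIS/code_trans_t5_large_source_code_summarization_python_transfer_learning_finetune"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 for CPU-only inference
)

# Tokenized python function, matching the example in the full card.
tokenized_code = 'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line )'
print(pipeline([tokenized_code]))
```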
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
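A minimal sketch of the pipeline usage referenced above, taken from this card's full markdown version and using the sql widget example from this row's metadata; `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

checkpoint = "SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 for CPU-only inference
)

# Tokenized sql query from the card's widget example.
tokenized_code = "select time ( col0 ) from tab0"
print(pipeline([tokenized_code]))
```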
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 120,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/sql/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code. ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
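The "How to use" snippet dropped from this rendition is restated below as a short sketch mirroring the full card for this checkpoint; `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

checkpoint = "SEBIS/code_trans_t5_large_source_code_summarization_sql_multitask_finetune"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 for CPU-only inference
)

# Tokenized sql query from the card's widget example.
tokenized_code = "select time ( col0 ) from tab0"
print(pipeline([tokenized_code]))
```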
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 100 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the sql code snippets. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_large_source_code_summarization_sql_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/sql/large_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code. ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_large_source_code_summarization_sql_transfer_learning_finetune
null
[ "transformers", "pytorch", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 large model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the sql code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 200 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
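For completeness, the pipeline call referenced above is sketched here with this row's checkpoint id and widget input, mirroring the full markdown card; `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

checkpoint = "SEBIS/code_trans_t5_large_source_code_summarization_sql_transfer_learning_finetune"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 for CPU-only inference
)

# Tokenized sql query from the card's widget example.
tokenized_code = "select time ( col0 ) from tab0"
print(pipeline([tokenized_code]))
```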
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
feature-extraction
transformers
# CodeTrans transfer learning pre-trained model Pretrained model on programming languages using the t5 large model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-large` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. It could be used to fine-tune other tasks in the software development domain. > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{}
SEBIS/code_trans_t5_large_transfer_learning_pretrain
null
[ "transformers", "pytorch", "t5", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #endpoints_compatible #text-generation-inference #region-us
# CodeTrans transfer learning pre-trained model Pretrained model on programming languages using the t5 large model architecture. It was first released in this repository. ## Model description This CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. The model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. It could be used to fine-tune other tasks in the software development domain. > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn
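This card covers only the pre-trained checkpoint and ships no usage snippet. As a hedged sketch (not part of the original card), loading it as a starting point for downstream fine-tuning or for feature extraction could look like this:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

# Checkpoint id taken from this row; the fine-tuning loop itself is out of scope here.
checkpoint = "SEBIS/code_trans_t5_large_transfer_learning_pretrain"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelWithLMHead.from_pretrained(checkpoint)  # encoder-decoder weights, ready to fine-tune

# Feature-extraction example: encode a tokenized code snippet with the encoder only.
inputs = tokenizer("select time ( col0 ) from tab0", return_tensors="pt")
features = model.get_encoder()(**inputs).last_hidden_state  # shape: (batch, sequence, hidden)
print(features.shape)
```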
[ "# CodeTrans transfer learning pre-trained model\nPretrained model on programming languages using the t5 large model architecture. It was first released in\nthis repository.", "## Model description\n\nThis CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. \n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. \n\nIt could be used to fine-tune other tasks in the software development domain.\n\n\n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn" ]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n", "# CodeTrans transfer learning pre-trained model\nPretrained model on programming languages using the t5 large model architecture. It was first released in\nthis repository.", "## Model description\n\nThis CodeTrans model is based on the 't5-large' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. \n\nThe model was trained on a single TPU Pod V3-8 for 240,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. \n\nIt could be used to fine-tune other tasks in the software development domain.\n\n\n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn" ]
summarization
transformers
# CodeTrans model for api recommendation generation Pretrained model for api recommendation generation using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on Api Recommendation Generation dataset. ## Intended uses & limitations The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_api_generation"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_api_generation", skip_special_tokens=True), device=0 ) tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/api%20generation/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 68.71 | | CodeTrans-ST-Base | 70.45 | | CodeTrans-TF-Small | 68.90 | | CodeTrans-TF-Base | 72.11 | | CodeTrans-TF-Large | 73.26 | | CodeTrans-MT-Small | 58.43 | | CodeTrans-MT-Base | 67.97 | | CodeTrans-MT-Large | 72.29 | | CodeTrans-MT-TF-Small | 69.29 | | CodeTrans-MT-TF-Base | 72.89 | | CodeTrans-MT-TF-Large | **73.39** | | State of the art | 54.42 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "parse the uses licence node of this package , if any , and returns the license definition if theres"}]}
SEBIS/code_trans_t5_small_api_generation
null
[ "transformers", "pytorch", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for api recommendation generation ================================================= Pretrained model for api recommendation generation using the t5 small model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on the Api Recommendation Generation dataset. Intended uses & limitations --------------------------- The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
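The usage example referred to above is omitted in this rendition; a minimal sketch with this card's checkpoint and the widget query from its metadata follows, assuming a GPU for `device=0`.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

checkpoint = "SEBIS/code_trans_t5_small_api_generation"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 for CPU-only inference
)

# Natural-language query from the card's widget example.
query = "parse the uses licence node of this package , if any , and returns the license definition if theres"
print(pipeline([query]))
```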
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for api recommendation generation Pretrained model for api recommendation generation using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_api_generation_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_api_generation_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/api%20generation/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 68.71 | | CodeTrans-ST-Base | 70.45 | | CodeTrans-TF-Small | 68.90 | | CodeTrans-TF-Base | 72.11 | | CodeTrans-TF-Large | 73.26 | | CodeTrans-MT-Small | 58.43 | | CodeTrans-MT-Base | 67.97 | | CodeTrans-MT-Large | 72.29 | | CodeTrans-MT-TF-Small | 69.29 | | CodeTrans-MT-TF-Base | 72.89 | | CodeTrans-MT-TF-Large | **73.39** | | State of the art | 54.42 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "parse the uses licence node of this package , if any , and returns the license definition if theres"}]}
SEBIS/code_trans_t5_small_api_generation_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for api recommendation generation ================================================= Pretrained model for api recommendation generation using the t5 small model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
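As in the full markdown card for this checkpoint, the stripped "How to use" snippet boils down to the short pipeline call below; `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

checkpoint = "SEBIS/code_trans_t5_small_api_generation_multitask"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 for CPU-only inference
)

# Natural-language query from the card's widget example.
query = "parse the uses licence node of this package , if any , and returns the license definition if theres"
print(pipeline([query]))
```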
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for api recommendation generation Pretrained model for api recommendation generation using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the api recommendation generation task for the java apis. ## Intended uses & limitations The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_api_generation_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_api_generation_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/api%20generation/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1,150,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data. ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 68.71 | | CodeTrans-ST-Base | 70.45 | | CodeTrans-TF-Small | 68.90 | | CodeTrans-TF-Base | 72.11 | | CodeTrans-TF-Large | 73.26 | | CodeTrans-MT-Small | 58.43 | | CodeTrans-MT-Base | 67.97 | | CodeTrans-MT-Large | 72.29 | | CodeTrans-MT-TF-Small | 69.29 | | CodeTrans-MT-TF-Base | 72.89 | | CodeTrans-MT-TF-Large | **73.39** | | State of the art | 54.42 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "parse the uses licence node of this package , if any , and returns the license definition if theres"}]}
SEBIS/code_trans_t5_small_api_generation_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for api recommendation generation ================================================= Pretrained model for api recommendation generation using the t5 small model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the api recommendation generation task for the java apis. Intended uses & limitations --------------------------- The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1,150,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing api recommendation generation data. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
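The pipeline call this card's "How to use" paragraph points at (shown in full in the markdown version) is sketched here with this row's checkpoint id; `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

checkpoint = "SEBIS/code_trans_t5_small_api_generation_multitask_finetune"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 for CPU-only inference
)

# Natural-language query from the card's widget example.
query = "parse the uses licence node of this package , if any , and returns the license definition if theres"
print(pipeline([query]))
```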
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1,150,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1,150,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for api recommendation generation
Pretrained model for api recommendation generation using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the api recommendation generation task for the java apis.

## Intended uses & limitations

The model could be used to generate api usage for java programming tasks.

### How to use

Here is how to use this model to generate java api recommendations using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_api_generation_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_api_generation_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "parse the uses licence node of this package , if any , and returns the license definition if theres"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/api%20generation/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 1,400,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 1,150,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing api recommendation generation data.

## Evaluation results

For the api recommendation generation task, different models achieve the following results (in BLEU score):

Test results :

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 68.71 |
| CodeTrans-ST-Base | 70.45 |
| CodeTrans-TF-Small | 68.90 |
| CodeTrans-TF-Base | 72.11 |
| CodeTrans-TF-Large | 73.26 |
| CodeTrans-MT-Small | 58.43 |
| CodeTrans-MT-Base | 67.97 |
| CodeTrans-MT-Large | 72.29 |
| CodeTrans-MT-TF-Small | 69.29 |
| CodeTrans-MT-TF-Base | 72.89 |
| CodeTrans-MT-TF-Large | **73.39** |
| State of the art | 54.42 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
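Note: the snippet in the card above loads the checkpoint with `AutoModelWithLMHead`, which recent `transformers` releases deprecate in favour of task-specific classes. A minimal sketch of the equivalent seq2seq loading path follows; it is an illustrative addition rather than part of the original card, and greedy decoding with `max_length=64` is only an assumption, since the card does not state its generation settings.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SEBIS/code_trans_t5_small_api_generation_transfer_learning_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Space-tokenized natural-language query from the model's widget metadata.
query = "parse the uses licence node of this package , if any , and returns the license definition if theres"
inputs = tokenizer(query, return_tensors="pt")

# Greedy decoding with a modest length budget; adjust as needed.
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```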
{"tags": ["summarization"], "widget": [{"text": "parse the uses licence node of this package , if any , and returns the license definition if theres"}]}
SEBIS/code_trans_t5_small_api_generation_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for api recommendation generation ================================================= Pretrained model for api recommendation generation using the t5 small model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the api recommendation generation task for the java apis. Intended uses & limitations --------------------------- The model could be used to generate api usage for the java programming tasks. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 1,400,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1,150,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 1,400,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1,150,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 1,400,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1,150,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing api recommendation generation data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the Code Comment Generation dataset.

## Intended uses & limitations

The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java", skip_special_tokens=True),
    device=0
)

tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/code%20comment%20generation/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
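The card notes that tokenized java functions work best but does not include a tokenizer. Below is a rough sketch of one way to space-separate raw Java source into the token format used by the example input above; it is an approximation for illustration only, not the preprocessing CodeTrans actually used.

```python
import re

def naive_java_tokenize(code: str) -> str:
    """Insert spaces around identifiers, numbers, and punctuation so raw Java roughly
    matches the space-separated token format shown in the card's example input.
    This is only an approximation; the exact CodeTrans tokenizer is not documented here."""
    tokens = re.findall(r"[A-Za-z_][A-Za-z_0-9]*|\d+|\S", code)
    return " ".join(tokens)

print(naive_java_tokenize("protected String renderUri(URI uri){ return uri.toASCIIString(); }"))
# -> "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
```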
{"tags": ["summarization"], "widget": [{"text": "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"}]}
SEBIS/code_trans_t5_small_code_comment_generation_java
null
[ "transformers", "pytorch", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code comment generation java ================================================ Pretrained model on programming language java using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on Code Comment Generation dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/code%20comment%20generation/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"}]}
SEBIS/code_trans_t5_small_code_comment_generation_java_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code comment generation java ================================================ Pretrained model on programming language java using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code comment generation task for the java function/method.

## Intended uses & limitations

The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/code%20comment%20generation/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.

## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
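The cards state only that AdaFactor with an inverse square root learning rate schedule was used on TPU pods; they do not publish a fine-tuning script. A hedged PyTorch sketch of a comparable single training step with the `transformers` Adafactor implementation is shown below. The target comment string is made up for illustration, and `relative_step=True` with `lr=None` is only one way to approximate the described schedule.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Adafactor

model_name = "SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.train()

# relative_step=True with lr=None enables Adafactor's built-in inverse-square-root style
# schedule, which approximates the schedule described in the card.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)

# One illustrative training step on a single (code, comment) pair; the comment is invented.
code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
comment = "Render the URI as an ASCII string ."
batch = tokenizer(code, return_tensors="pt")
labels = tokenizer(comment, return_tensors="pt").input_ids

loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```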
{"tags": ["summarization"], "widget": [{"text": "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"}]}
SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code comment generation java ================================================ Pretrained model on programming language java using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code comment generation task for the java function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code comment generation java
Pretrained model on programming language java using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code comment generation task for the java function/method.

## Intended uses & limitations

The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/code%20comment%20generation/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code.

## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Java |
| -------------------- | :------------: |
| CodeTrans-ST-Small | 37.98 |
| CodeTrans-ST-Base | 38.07 |
| CodeTrans-TF-Small | 38.56 |
| CodeTrans-TF-Base | 39.06 |
| CodeTrans-TF-Large | **39.50** |
| CodeTrans-MT-Small | 20.15 |
| CodeTrans-MT-Base | 27.44 |
| CodeTrans-MT-Large | 34.69 |
| CodeTrans-MT-TF-Small | 38.37 |
| CodeTrans-MT-TF-Base | 38.90 |
| CodeTrans-MT-TF-Large | 39.25 |
| State of the art | 38.17 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"}]}
SEBIS/code_trans_t5_small_code_comment_generation_java_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code comment generation java ================================================ Pretrained model on programming language java using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code comment generation task for the java function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the CodeSearchNet Corpus go dataset.

## Intended uses & limitations

The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go", skip_special_tokens=True),
    device=0
)

tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/go/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
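The card names the CodeSearchNet Corpus go dataset but does not show how to load it. A hedged sketch using the `datasets` library follows; the dataset id, config, and column names are assumptions based on the public `code_search_net` dataset card and should be verified against the corpus CodeTrans actually used.

```python
from datasets import load_dataset

# Assumed dataset id/config; newer `datasets` versions may additionally require
# trust_remote_code=True because this dataset ships a loading script.
csn_go = load_dataset("code_search_net", "go", split="train")

example = csn_go[0]
code = example["func_code_string"]                 # assumed column: the Go function source
docstring = example["func_documentation_string"]   # assumed column: the reference description
print(code[:200], "->", docstring[:100])
```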
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_go
null
[ "transformers", "pytorch", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation go ==================================================== Pretrained model on programming language go using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized go code functions: it works best with tokenized go functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus go dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/go/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 340,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
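The tables above report BLEU, but the cards do not state which scorer or references were used. A sketch of how one might score generated go docstrings against references with `sacrebleu` follows; the scorer choice and the example strings are assumptions for illustration, not the project's evaluation script.

```python
import sacrebleu

# Hypothetical model outputs and reference docstrings (placeholders for illustration).
hypotheses = [
    "returns true if the snapshot should be aborted",
    "returns the progress state as a string",
]
references = [
    "needSnapshotAbort returns true if the pending snapshot can be aborted",
    "String returns a readable representation of the progress state",
]

# One reference stream, parallel to the hypotheses.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```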
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_go_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation go ==================================================== Pretrained model on programming language go using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized go code functions: it works best with tokenized go functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 340,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 340,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 340,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the go function/method.

## Intended uses & limitations

The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/go/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing go code.

## Evaluation results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_go_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation go ==================================================== Pretrained model on programming language go using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized go code functions: it works best with tokenized go functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the go function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing go code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation go Pretrained model on programming language go using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the go function/method. ## Intended uses & limitations The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_go_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/go/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
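The card notes that tokenized go input tends to work better than raw source. As a rough illustration only, the snippet below splits raw Go code into space-separated tokens with a simple regular expression; it is a stand-in pre-tokenizer, not the tokenization pipeline used to build the CodeTrans training data.

```python
# Rough illustration only: split raw Go source into space-separated tokens,
# since the card reports better results on tokenized input. This regex is a
# stand-in, not the tokenizer used to prepare the CodeTrans training data.
import re

def rough_go_tokenize(source: str) -> str:
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*|\d+|\S", source)
    return " ".join(tokens)

raw = "func add(a, b int) int { return a + b }"
print(rough_go_tokenize(raw))
# -> func add ( a , b int ) int { return a + b }
```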
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_go_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation go ==================================================== Pretrained model on programming language go using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized go code functions: it works best with tokenized go functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the go function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing go code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate go function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing go code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus java dataset. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/java/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
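Since `SummarizationPipeline` accepts a list of inputs, several tokenized Java functions can be documented in one call. The sketch below assumes the `pipeline` object built in the example above; the second function is a made-up illustration.

```python
# Sketch: document several tokenized Java functions in one pipeline call.
# `pipeline` is the SummarizationPipeline built in the example above;
# the second function is a made-up illustration.
java_functions = [
    "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }",
    "public static int add ( int a , int b ) { return a + b ; }",
]
for fn, result in zip(java_functions, pipeline(java_functions)):
    print(result["summary_text"])
```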
{"tags": ["summarization"], "widget": [{"text": "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_java
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation java ====================================================== Pretrained model on programming language java using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on the CodeSearchNet Corpus java dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/java/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 400,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
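The table above reports BLEU scores. A generic way to score generated docstrings against references is sketched below with `sacrebleu`; this is not the authors' evaluation script, so scores computed this way need not match the table exactly.

```python
# Generic BLEU sketch with sacrebleu; this is NOT the authors' evaluation script,
# so scores computed this way need not match the table exactly.
import sacrebleu

predictions = ["returns a function that casts objects to the target class ."]
references = [["casts a function to the given target class ."]]  # one reference stream

score = sacrebleu.corpus_bleu(predictions, references)
print(f"BLEU: {score.score:.2f}")
```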
{"tags": ["summarization"], "widget": [{"text": "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation java ====================================================== Pretrained model on programming language java using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 400,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 400,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 400,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the java function/method. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/java/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
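The pre-training uses AdaFactor with an inverse square root learning rate schedule. In the T5 setup this schedule is usually written as lr(step) = 1 / sqrt(max(step, warmup_steps)); the 10,000-step warm-up in the sketch below is the T5 default and an assumption here, since the card does not state the exact value used for CodeTrans.

```python
# Inverse square root schedule as used for T5-style pre-training:
# lr(step) = 1 / sqrt(max(step, warmup_steps)).
# The 10,000-step warm-up is the T5 default and an assumption here;
# the card does not state the exact value used for CodeTrans.
import math

def inverse_sqrt_lr(step: int, warmup_steps: int = 10_000) -> float:
    return 1.0 / math.sqrt(max(step, warmup_steps))

for s in (1, 10_000, 100_000, 500_000):
    print(s, round(inverse_sqrt_lr(s), 6))
```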
{"tags": ["summarization"], "widget": [{"text": "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_java_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation java ====================================================== Pretrained model on programming language java using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the java function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 4000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 4000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation java Pretrained model on programming language java using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized java code functions: it works best with tokenized java functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the java function/method. ## Intended uses & limitations The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_java_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/java/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
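Generation behaviour can be adjusted through the pipeline call itself. The values below are illustrative, not the decoding settings used in the paper; `pipeline` and `tokenized_code` refer to the objects defined in the example above.

```python
# Illustrative decoding settings forwarded to generate() by the pipeline;
# these are not the settings used in the paper.
outputs = pipeline(
    [tokenized_code],
    max_length=48,   # cap the length of the generated description
    num_beams=4,     # beam search instead of greedy decoding
)
print(outputs[0]["summary_text"])
```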
{"tags": ["summarization"], "widget": [{"text": "public static < T , U > Function < T , U > castFunction ( Class < U > target ) { return new CastToClass < T , U > ( target ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_java_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation java ====================================================== Pretrained model on programming language java using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized java code functions: it works best with tokenized java functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the java function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the java function or be fine-tuned on other java code tasks. It can be used on unparsed and untokenized java code. However, if the java code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate java function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing java code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate java function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing java code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation javascript Pretrained model on programming language javascript using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus javascript dataset. ## Intended uses & limitations The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript", skip_special_tokens=True), device=0 ) tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/javascript/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
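The usage example hard-codes `device=0`, which requires a CUDA GPU. A small variation that falls back to CPU when no GPU is available:

```python
# Variation of the usage example that falls back to CPU when no GPU is available.
# device=-1 selects CPU in transformers pipelines; device=0 is the first CUDA GPU.
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

model_name = "SEBIS/code_trans_t5_small_code_documentation_generation_javascript"
pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(model_name),
    tokenizer=AutoTokenizer.from_pretrained(model_name, skip_special_tokens=True),
    device=0 if torch.cuda.is_available() else -1,
)
```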
{"tags": ["summarization"], "widget": [{"text": "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_javascript
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation javascript ============================================================ Pretrained model on programming language javascript using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on the CodeSearchNet Corpus javascript dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation javascript Pretrained model on programming language javascript using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/javascript/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation javascript ============================================================ Pretrained model on programming language javascript using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
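Because the usage code block was stripped from this text-only field, here is the snippet referenced above, reproduced from the full card for this checkpoint (`device=0` assumes a GPU is available; newer transformers releases may require `AutoModelForSeq2SeqLM` in place of the deprecated `AutoModelWithLMHead`):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Summarization pipeline around the multitask javascript checkpoint.
pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask", skip_special_tokens=True),
    device=0  # assumes a GPU; use device=-1 for CPU
)

# Tokenized javascript function, the input format the model works best with.
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```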
[ "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation javascript Pretrained model on programming language javascript using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the javascript function/method. ## Intended uses & limitations The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/javascript/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 32,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation javascript ============================================================ Pretrained model on programming language javascript using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the javascript function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 32,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing javascript code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
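The referenced snippet for the fine-tuned multitask javascript checkpoint, restored from the full card above (`device=0` assumes a GPU):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_multitask_finetune", skip_special_tokens=True),
    device=0  # GPU assumed; device=-1 runs on CPU
)

# Tokenized javascript function to document.
tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```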
[ "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 32,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 32,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation javascript Pretrained model on programming language javascript using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the javascript function/method. ## Intended uses & limitations The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/javascript/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 40,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_javascript_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation javascript ============================================================ Pretrained model on programming language javascript using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized javascript code functions: it works best with tokenized javascript functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the javascript function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the javascript function or be fine-tuned on other javascript code tasks. It can be used on unparsed and untokenized javascript code. However, if the javascript code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 40,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing javascript code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
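For the transfer-learning fine-tuned javascript checkpoint, the stripped-out usage example from the full card looks as follows (GPU assumed via `device=0`):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_javascript_transfer_learning_finetune", skip_special_tokens=True),
    device=0  # GPU assumed; device=-1 runs on CPU
)

tokenized_code = "function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }"
pipeline([tokenized_code])
```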
[ "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 40,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate javascript function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 40,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing javascript code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus php dataset. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/php/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_php
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation php ===================================================== Pretrained model on programming language php using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized php code functions: it works best with tokenized php functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on the CodeSearchNet Corpus php dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
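The usage example dropped from this field, taken from the full card for the single-task php checkpoint (`device=0` assumes a GPU is available):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php", skip_special_tokens=True),
    device=0  # GPU assumed; device=-1 runs on CPU
)

# Tokenized php method, matching the input format the model was trained on.
tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```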
[ "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/php/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation php ===================================================== Pretrained model on programming language php using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized php code functions: it works best with tokenized php functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
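A sketch of the referenced pipeline call for the multitask php checkpoint, mirroring the snippet in the full card above (GPU assumed; swap in `device=-1` for CPU):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask", skip_special_tokens=True),
    device=0  # GPU assumed
)

tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```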
[ "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the php function/method. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/php/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code. 
Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation php ===================================================== Pretrained model on programming language php using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized php code functions: it works best with tokenized php functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the php function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing php code. Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
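The corresponding usage snippet for the fine-tuned multitask php checkpoint, restored from the full card (`device=0` assumes a GPU):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_multitask_finetune", skip_special_tokens=True),
    device=0  # GPU assumed; device=-1 runs on CPU
)

tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```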
[ "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation php Pretrained model on programming language php using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized php code functions: it works best with tokenized php functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method. ## Intended uses & limitations The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/php/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code. 
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation php ===================================================== Pretrained model on programming language php using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized php code functions: it works best with tokenized php functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the php function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the php function or be fine-tuned on other php code tasks. It can be used on unparsed and untokenized php code. However, if the php code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate php function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing php code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
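Finally, the referenced snippet for the transfer-learning fine-tuned php checkpoint, reproduced from the full card above (GPU assumed via `device=0`):

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_php_transfer_learning_finetune", skip_special_tokens=True),
    device=0  # GPU assumed; device=-1 runs on CPU
)

tokenized_code = "public static function update ( $ table ) { if ( ! is_array ( $ table ) ) { $ table = json_decode ( $ table , true ) ; } if ( ! SchemaManager :: tableExists ( $ table [ 'oldName' ] ) ) { throw SchemaException :: tableDoesNotExist ( $ table [ 'oldName' ] ) ; } $ updater = new self ( $ table ) ; $ updater -> updateTable ( ) ; }"
pipeline([tokenized_code])
```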
[ "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate php function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing php code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation python Pretrained model on programming language python using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus python dataset. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python", skip_special_tokens=True), device=0 ) tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/python/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_python
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation python ======================================================== Pretrained model on programming language python using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus python dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation python Pretrained model on programming language python using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/python/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
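The card does not spell out the warmup length or peak rate of that inverse square root schedule, so the following is only a minimal sketch of the general shape such a schedule takes; the constants are illustrative placeholders, not the CodeTrans settings.

```python
# Hedged sketch of a generic inverse square root learning-rate schedule.
# warmup_steps and peak_lr are illustrative placeholders, not the values
# used to pre-train CodeTrans.
def inverse_sqrt_lr(step: int, warmup_steps: int = 10_000, peak_lr: float = 1e-2) -> float:
    """Linear warmup, then decay proportional to 1/sqrt(step)."""
    step = max(step, 1)
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (warmup_steps ** 0.5) / (step ** 0.5)

# The rate peaks at the end of warmup and then decays slowly.
print(inverse_sqrt_lr(5_000), inverse_sqrt_lr(10_000), inverse_sqrt_lr(400_000))
```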
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_python_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation python ======================================================== Pretrained model on programming language python using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation python Pretrained model on programming language python using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the python function/method. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/python/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code. 
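Since the card stresses that the model works best on tokenized python functions, here is a small, unofficial sketch of one way to turn raw Python source into the whitespace-separated token stream used in the example above; the official CodeTrans preprocessing may differ.

```python
# Unofficial sketch: whitespace-tokenizing raw Python source with the
# standard library, roughly matching the style of the example inputs.
# This is not the official CodeTrans preprocessing pipeline.
import io
import tokenize

def whitespace_tokenize(source: str) -> str:
    skip = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
            tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER}
    tokens = [tok.string
              for tok in tokenize.generate_tokens(io.StringIO(source).readline)
              if tok.type not in skip]
    return " ".join(tokens)

raw = "def greet(name):\n    return 'Hello ' + name\n"
print(whitespace_tokenize(raw))
# -> def greet ( name ) : return 'Hello ' + name
```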
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_python_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation python ======================================================== Pretrained model on programming language python using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the python function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 4000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 4000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 4000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation python Pretrained model on programming language python using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the python function/method. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_python_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/python/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code. 
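The fine-tuning setup above (AdaFactor with an inverse square root schedule) can be approximated in PyTorch with the Adafactor implementation shipped in `transformers`; the arguments below are a plausible reading of that description, not the exact CodeTrans configuration.

```python
# Hedged sketch: an Adafactor optimizer in the spirit of the setup described
# above. With lr=None and relative_step=True, transformers' Adafactor applies
# an inverse-square-root relative step size internally. These are illustrative
# settings, not the exact CodeTrans fine-tuning hyperparameters.
from transformers import AutoModelWithLMHead
from transformers.optimization import Adafactor

model = AutoModelWithLMHead.from_pretrained(
    "SEBIS/code_trans_t5_small_code_documentation_generation_python_transfer_learning_finetune"
)
optimizer = Adafactor(
    model.parameters(),
    lr=None,              # derive the step size from the internal schedule
    scale_parameter=True,
    relative_step=True,   # inverse-square-root relative step size
    warmup_init=True,
)
```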
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def e ( message , exit_code = None ) : print_log ( message , YELLOW , BOLD ) if exit_code is not None : sys . exit ( exit_code )"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_python_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation python ======================================================== Pretrained model on programming language python using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the python function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus ruby dataset. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/ruby/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_ruby
null
[ "transformers", "pytorch", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation ruby ====================================================== Pretrained model on programming language ruby using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on CodeSearchNet Corpus ruby dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/function%20documentation%20generation/ruby/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
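If you prefer calling the checkpoint directly rather than through the SummarizationPipeline shown above, a rough equivalent looks like this; the generation arguments are illustrative defaults rather than tuned CodeTrans settings.

```python
# Hedged sketch: invoking the checkpoint directly instead of via
# SummarizationPipeline. Generation arguments are illustrative defaults.
from transformers import AutoTokenizer, AutoModelWithLMHead

name = "SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelWithLMHead.from_pretrained(name)

tokenized_code = (
    "def add ( severity , progname , & block ) return true if io . nil? || "
    "severity < level message = format_message ( severity , progname , yield ) "
    "MUTEX . synchronize { io . write ( message ) } true end"
)
inputs = tokenizer(tokenized_code, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```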
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #has_space #text-generation-inference #region-us
CodeTrans model for code documentation generation ruby ====================================================== Pretrained model on programming language ruby using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #has_space #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 420,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the ruby function/method. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/function%20documentation%20generation/ruby/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code. 
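As a rough sense of scale (assuming every step consumes one full batch), fine-tuning for 2,000 steps at batch size 256 corresponds to about 2,000 × 256 ≈ 512,000 ruby examples processed.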
## Evaluation results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation ruby ====================================================== Pretrained model on programming language ruby using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the code documentation generation task for the ruby function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for code documentation generation ruby Pretrained model on programming language ruby using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the ruby function/method. ## Intended uses & limitations The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/function%20documentation%20generation/ruby/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code.
## Evaluation results For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | Java | Go | Php | Ruby | JavaScript | | -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 | | CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 | | CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 | | CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 | | CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** | | CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 | | CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 | | CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 | | CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 | | CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 | | CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 | | State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"}]}
SEBIS/code_trans_t5_small_code_documentation_generation_ruby_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for code documentation generation ruby ====================================================== Pretrained model on programming language ruby using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized ruby code functions: it works best with tokenized ruby functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the code documentation generation task for the ruby function/method. Intended uses & limitations --------------------------- The model could be used to generate the description for the ruby function or be fine-tuned on other ruby code tasks. It can be used on unparsed and untokenized ruby code. However, if the ruby code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing ruby code. Evaluation results ------------------ For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
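The flattened card above drops the code snippet it refers to, so the following is a minimal sketch of the described SummarizationPipeline usage; the checkpoint name and the tokenized ruby function are taken from this row's id and widget metadata, and `device=0` assumes a GPU is available.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Mirrors the usage pattern shown in this row's full model card.
checkpoint = "SEBIS/code_trans_t5_small_code_documentation_generation_ruby_transfer_learning_finetune"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 to run on CPU
)

# Tokenized ruby function taken from this row's widget metadata.
tokenized_code = "def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end"
print(pipeline([tokenized_code]))
```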
[ "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate ruby function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing ruby code.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for git commit message generation Pretrained model on git commits using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the Git Commit Message Generation dataset. ## Intended uses & limitations The model could be used to generate a git commit message for git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_commit_generation"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_commit_generation", skip_special_tokens=True), device=0 ) tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/commit%20generation/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 39.61 | | CodeTrans-ST-Base | 38.67 | | CodeTrans-TF-Small | 44.22 | | CodeTrans-TF-Base | 44.17 | | CodeTrans-TF-Large | **44.41** | | CodeTrans-MT-Small | 36.17 | | CodeTrans-MT-Base | 39.25 | | CodeTrans-MT-Large | 41.18 | | CodeTrans-MT-TF-Small | 43.96 | | CodeTrans-MT-TF-Base | 44.19 | | CodeTrans-MT-TF-Large | 44.34 | | State of the art | 32.81 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"}]}
SEBIS/code_trans_t5_small_commit_generation
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #has_space #text-generation-inference #region-us
CodeTrans model for git commit message generation ================================================= Pretrained model on git commits using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized git commits: it works best with tokenized git commits. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on the Git Commit Message Generation dataset. Intended uses & limitations --------------------------- The model could be used to generate a git commit message for git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
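Since the flattened text above omits the snippet it mentions, here is a minimal sketch of the SummarizationPipeline usage it describes; the checkpoint name and the tokenized commit diff come from this row's id and widget metadata, and `device=0` assumes a GPU is available.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Mirrors the usage pattern shown in this row's full model card.
checkpoint = "SEBIS/code_trans_t5_small_commit_generation"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 to run on CPU
)

# Tokenized commit diff taken from this row's widget metadata.
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
print(pipeline([tokenized_code]))
```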
[ "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #has_space #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for git commit message generation Pretrained model on git commits using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate a git commit message for git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/commit%20generation/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 39.61 | | CodeTrans-ST-Base | 38.67 | | CodeTrans-TF-Small | 44.22 | | CodeTrans-TF-Base | 44.17 | | CodeTrans-TF-Large | **44.41** | | CodeTrans-MT-Small | 36.17 | | CodeTrans-MT-Base | 39.25 | | CodeTrans-MT-Large | 41.18 | | CodeTrans-MT-TF-Small | 43.96 | | CodeTrans-MT-TF-Base | 44.19 | | CodeTrans-MT-TF-Large | 44.34 | | State of the art | 32.81 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"}]}
SEBIS/code_trans_t5_small_commit_generation_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for git commit message generation ================================================= Pretrained model on git commits using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized git commits: it works best with tokenized git commits. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate a git commit message for git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
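As with the other flattened cards, the code it refers to is missing here; the sketch below reconstructs the described SummarizationPipeline usage, with the checkpoint name and tokenized commit diff taken from this row's id and widget metadata (`device=0` assumes a GPU).

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Mirrors the usage pattern shown in this row's full model card.
checkpoint = "SEBIS/code_trans_t5_small_commit_generation_multitask"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 to run on CPU
)

# Tokenized commit diff taken from this row's widget metadata.
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
print(pipeline([tokenized_code]))
```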
[ "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 360,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for git commit message generation Pretrained model on git commits using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the git commit message generation task for the java commit changes. ## Intended uses & limitations The model could be used to generate a git commit message for git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/commit%20generation/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 8,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes. ## Evaluation results For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 39.61 | | CodeTrans-ST-Base | 38.67 | | CodeTrans-TF-Small | 44.22 | | CodeTrans-TF-Base | 44.17 | | CodeTrans-TF-Large | **44.41** | | CodeTrans-MT-Small | 36.17 | | CodeTrans-MT-Base | 39.25 | | CodeTrans-MT-Large | 41.18 | | CodeTrans-MT-TF-Small | 43.96 | | CodeTrans-MT-TF-Base | 44.19 | | CodeTrans-MT-TF-Large | 44.34 | | State of the art | 32.81 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"}]}
SEBIS/code_trans_t5_small_commit_generation_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for git commit message generation ================================================= Pretrained model on git commits using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized git commits: it works best with tokenized git commits. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the git commit message generation task for the java commit changes. Intended uses & limitations --------------------------- The model could be used to generate a git commit message for git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 8,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes. Evaluation results ------------------ For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
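The flattened card above again drops the snippet it points to, so here is a minimal sketch of the described SummarizationPipeline usage; checkpoint name and tokenized commit diff are taken from this row's id and widget metadata, and `device=0` assumes a GPU.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Mirrors the usage pattern shown in this row's full model card.
checkpoint = "SEBIS/code_trans_t5_small_commit_generation_multitask_finetune"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 to run on CPU
)

# Tokenized commit diff taken from this row's widget metadata.
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
print(pipeline([tokenized_code]))
```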
[ "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 8,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 8,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for git commit message generation Pretrained model on git commits using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized git commits: it works best with tokenized git commits. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the git commit message generation task for the java commit changes. ## Intended uses & limitations The model could be used to generate a git commit message for git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_commit_generation_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/commit%20generation/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes. ## Evaluation results For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score): Test results : | Language / Model | Java | | -------------------- | :------------: | | CodeTrans-ST-Small | 39.61 | | CodeTrans-ST-Base | 38.67 | | CodeTrans-TF-Small | 44.22 | | CodeTrans-TF-Base | 44.17 | | CodeTrans-TF-Large | **44.41** | | CodeTrans-MT-Small | 36.17 | | CodeTrans-MT-Base | 39.25 | | CodeTrans-MT-Large | 41.18 | | CodeTrans-MT-TF-Small | 43.96 | | CodeTrans-MT-TF-Base | 44.19 | | CodeTrans-MT-TF-Large | 44.34 | | State of the art | 32.81 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"}]}
SEBIS/code_trans_t5_small_commit_generation_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for git commit message generation ================================================= Pretrained model on git commits using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized git commits: it works best with tokenized git commits. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the git commit message generation task for the java commit changes. Intended uses & limitations --------------------------- The model could be used to generate a git commit message for git commit changes or be fine-tuned on other relevant tasks. It can be used on unparsed and untokenized commit changes. However, if the change is tokenized, the performance should be better. ### How to use Here is how to use this model to generate a git commit message using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing commit changes. Evaluation results ------------------ For the git commit message generation task, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
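Here too the flattened text omits the code it mentions; the following is a minimal sketch of the described SummarizationPipeline usage, with the checkpoint name and tokenized commit diff taken from this row's id and widget metadata (`device=0` assumes a GPU).

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Mirrors the usage pattern shown in this row's full model card.
checkpoint = "SEBIS/code_trans_t5_small_commit_generation_transfer_learning_finetune"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 to run on CPU
)

# Tokenized commit diff taken from this row's widget metadata.
tokenized_code = "new file mode 100644 index 000000000 . . 892fda21b Binary files / dev / null and b / src / plugins / gateway / lib / joscar . jar differ"
print(pipeline([tokenized_code]))
```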
[ "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate git commit message using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 10,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing commit changes.\n\n\nEvaluation results\n------------------\n\n\nFor the git commit message generation task, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for program synthesis Pretrained model on programming language lisp inspired DSL using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the Program Synthesis dataset. ## Intended uses & limitations The model could be used to generate lisp inspired DSL code given a human language description of the task. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese", skip_special_tokens=True), device=0 ) tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/program%20synthesis/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the program synthesis task, different models achieve the following results (in BLEU score): Test results : | Language / Model | LISP | | -------------------- | :------------: | | CodeTrans-ST-Small | 89.43 | | CodeTrans-ST-Base | 89.65 | | CodeTrans-TF-Small | 90.30 | | CodeTrans-TF-Base | 90.24 | | CodeTrans-TF-Large | 90.21 | | CodeTrans-MT-Small | 82.88 | | CodeTrans-MT-Base | 86.99 | | CodeTrans-MT-Large | 90.27 | | CodeTrans-MT-TF-Small | **90.31** | | CodeTrans-MT-TF-Base | 90.30 | | CodeTrans-MT-TF-Large | 90.17 | | State of the art | 85.80 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "you are given an array of numbers a and a number b , compute the difference of elements in a and b"}]}
SEBIS/code_trans_t5_small_program_synthese
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for program synthesis ===================================== Pretrained model on programming language lisp inspired DSL using the t5 small model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on the Program Synthesis dataset. Intended uses & limitations --------------------------- The model could be used to generate lisp inspired DSL code given a human language description of the task. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the program synthesis task, different models achieve the following results (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
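The flattened card above omits the snippet it refers to, so here is a minimal sketch of the described SummarizationPipeline usage; the checkpoint name and the natural-language task description come from this row's id and widget metadata, and `device=0` assumes a GPU is available.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Mirrors the usage pattern shown in this row's full model card.
checkpoint = "SEBIS/code_trans_t5_small_program_synthese"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 to run on CPU
)

# Natural-language task description taken from this row's widget metadata.
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
print(pipeline([tokenized_code]))
```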
[ "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for program synthesis Pretrained model on programming language lisp inspired DSL using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate lisp inspired DSL code given a human language description of the task. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/program%20synthesis/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 440,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the program synthesis task, different models achieve the following results (in BLEU score): Test results : | Language / Model | LISP | | -------------------- | :------------: | | CodeTrans-ST-Small | 89.43 | | CodeTrans-ST-Base | 89.65 | | CodeTrans-TF-Small | 90.30 | | CodeTrans-TF-Base | 90.24 | | CodeTrans-TF-Large | 90.21 | | CodeTrans-MT-Small | 82.88 | | CodeTrans-MT-Base | 86.99 | | CodeTrans-MT-Large | 90.27 | | CodeTrans-MT-TF-Small | **90.31** | | CodeTrans-MT-TF-Base | 90.30 | | CodeTrans-MT-TF-Large | 90.17 | | State of the art | 85.80 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "you are given an array of numbers a and a number b , compute the difference of elements in a and b"}]}
SEBIS/code_trans_t5_small_program_synthese_multitask
null
[ "transformers", "pytorch", "tf", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #tf #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for program synthesis ===================================== Pretrained model on programming language lisp inspired DSL using the t5 small model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate lisp inspired DSL code given a human language description of the task. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 440,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the program synthesis task, different models achieve the following results (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
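Since the snippet referenced above was stripped during flattening, here is a minimal sketch of the described SummarizationPipeline usage; the checkpoint name and the natural-language task description are taken from this row's id and widget metadata (`device=0` assumes a GPU).

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

# Mirrors the usage pattern shown in this row's full model card.
checkpoint = "SEBIS/code_trans_t5_small_program_synthese_multitask"

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(checkpoint),
    tokenizer=AutoTokenizer.from_pretrained(checkpoint, skip_special_tokens=True),
    device=0,  # assumes a GPU; use device=-1 to run on CPU
)

# Natural-language task description taken from this row's widget metadata.
tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
print(pipeline([tokenized_code]))
```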
[ "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 440,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #tf #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 440,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for program synthesis Pretrained model on programming language lisp inspired DSL using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code. ## Intended uses & limitations The model could be used to generate lisp inspired DSL code given a human language description of the task. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/program%20synthesis/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 16,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing lisp inspired DSL data. ## Evaluation results For the program synthesis task, different models achieve the following results (in BLEU score): Test results : | Language / Model | LISP | | -------------------- | :------------: | | CodeTrans-ST-Small | 89.43 | | CodeTrans-ST-Base | 89.65 | | CodeTrans-TF-Small | 90.30 | | CodeTrans-TF-Base | 90.24 | | CodeTrans-TF-Large | 90.21 | | CodeTrans-MT-Small | 82.88 | | CodeTrans-MT-Base | 86.99 | | CodeTrans-MT-Large | 90.27 | | CodeTrans-MT-TF-Small | **90.31** | | CodeTrans-MT-TF-Base | 90.30 | | CodeTrans-MT-TF-Large | 90.17 | | State of the art | 85.80 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "you are given an array of numbers a and a number b , compute the difference of elements in a and b"}]}
SEBIS/code_trans_t5_small_program_synthese_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for program synthesis ===================================== Pretrained model on programming language lisp inspired DSL using the t5 small model architecture. It was first released in this repository. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code. Intended uses & limitations --------------------------- The model could be used to generate lisp inspired DSL code given the human language description tasks. ### How to use Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 16,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data. Evaluation results ------------------ For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 16,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 16,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data.\n\n\nEvaluation results\n------------------\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for program synthesis

## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Citation Information](#citation-information)

## Model Details
- **Model Description:** This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code.
- **Developed by:** [Ahmed Elnaggar](https://www.linkedin.com/in/prof-ahmed-elnaggar/), [Wei Ding](https://www.linkedin.com/in/wei-ding-92561270/)
- **Model Type:** Summarization
- **Language(s):** English
- **License:** Unknown
- **Resources for more information:**
  - [Research Paper](https://arxiv.org/pdf/2104.02443.pdf)
  - [GitHub Repo](https://github.com/agemagician/CodeTrans)

## How to Get Started With the Model

Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "you are given an array of numbers a and a number b , compute the difference of elements in a and b"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/transfer%20learning%20fine-tuning/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Uses

#### Direct Use

The model could be used to generate lisp inspired DSL code given a human language description of the task.

## Risks, Limitations and Biases

As detailed in this model’s [publication](https://arxiv.org/pdf/2104.02443.pdf), this model makes use of the data-set [One Billion Word Language Model Benchmark corpus](https://www.researchgate.net/publication/259239818_One_Billion_Word_Benchmark_for_Measuring_Progress_in_Statistical_Language_Modeling) in order to gather the self-supervised English data samples.

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). As such, it should be noted that language models that are pretrained from text corpora such as the One Billion Word Language Model Benchmark corpus have been further explored; for example, [Ngo, Helen & Araújo et al. (2021)](https://www.researchgate.net/publication/355582954_No_News_is_Good_News_A_Critique_of_the_One_Billion_Word_Benchmark) reports that the One Billion Word Language Model Benchmark corpus

> “generate text in the linguistic style of news, without any grounding in the real world. In addition to potential harms from models which are inadvertently optimized for generating fake news.”

The aforementioned publication continues to warn that the One Billion Word Language Model Benchmark corpus

> contains sentences which contain words commonly found on blocklists. While these sentences may have plausibly been used in expository contexts within the article, the destructive sentence-level preprocessing and shuffling applied to lm1b [One Billion Word Language Model Benchmark corpus] removes all long-range structure from the text and makes it infeasible to track the context and intent of individual examples.

[Ngo, Helen & Araújo et al. (2021)](https://www.researchgate.net/publication/355582954_No_News_is_Good_News_A_Critique_of_the_One_Billion_Word_Benchmark)

## Training

#### Training Data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

The authors provide additional notes about the vocabulary used in the [associated paper](https://arxiv.org/pdf/2104.02443.pdf):

> We used the SentencePiece model (Kudo, 2018) to construct the vocabulary for this research, as well as to decode and encode the input/output.

## Training procedure

#### Preprocessing

##### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

###### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing lisp inspired DSL data.

## Evaluation

#### Results

For the code documentation tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model      |      LISP      |
| --------------------- | :------------: |
| CodeTrans-ST-Small    |     89.43      |
| CodeTrans-ST-Base     |     89.65      |
| CodeTrans-TF-Small    |     90.30      |
| CodeTrans-TF-Base     |     90.24      |
| CodeTrans-TF-Large    |     90.21      |
| CodeTrans-MT-Small    |     82.88      |
| CodeTrans-MT-Base     |     86.99      |
| CodeTrans-MT-Large    |     90.27      |
| CodeTrans-MT-TF-Small |   **90.31**    |
| CodeTrans-MT-TF-Base  |     90.30      |
| CodeTrans-MT-TF-Large |     90.17      |
| State of the art      |     85.80      |

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf).

- **Hardware Type:** Nvidia RTX 8000 GPUs
- **Hours used:** Unknown
- **Cloud Provider:** GCC TPU v2-8 and v3-8.
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown

## Citation Information

```bibtex
@misc{elnaggar2021codetrans,
    title={CodeTrans: Towards Cracking the Language of Silicon's Code Through Self-Supervised Deep Learning and High Performance Computing},
    author={Ahmed Elnaggar and Wei Ding and Llion Jones and Tom Gibbs and Tamas Feher and Christoph Angerer and Silvia Severini and Florian Matthes and Burkhard Rost},
    year={2021},
    eprint={2104.02443},
    archivePrefix={arXiv},
    primaryClass={cs.SE}
}
```
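The pretraining setup described above (AdaFactor with an inverse square root learning-rate schedule) was run with the authors' own TPU tooling. Purely as an illustration of that optimizer configuration, a rough sketch using the Adafactor implementation shipped with the transformers library could look like this; the `t5-small` checkpoint here is a stand-in, and this is not the authors' actual training code:

```python
from transformers import AutoModelWithLMHead
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelWithLMHead.from_pretrained("t5-small")  # stand-in checkpoint

# With relative_step=True and lr=None, Adafactor uses its built-in
# inverse-square-root learning-rate schedule, matching the description above.
optimizer = Adafactor(
    model.parameters(),
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
    lr=None,
)
lr_scheduler = AdafactorSchedule(optimizer)
```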
{"tags": ["summarization"], "widget": [{"text": "you are given an array of numbers a and a number b , compute the difference of elements in a and b"}]}
SEBIS/code_trans_t5_small_program_synthese_transfer_learning_finetune
null
[ "transformers", "pytorch", "tf", "jax", "t5", "feature-extraction", "summarization", "arxiv:2104.02443", "arxiv:1910.09700", "arxiv:2105.09680", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[ "2104.02443", "1910.09700", "2105.09680" ]
[]
TAGS #transformers #pytorch #tf #jax #t5 #feature-extraction #summarization #arxiv-2104.02443 #arxiv-1910.09700 #arxiv-2105.09680 #endpoints_compatible #has_space #text-generation-inference #region-us
CodeTrans model for program synthesis ===================================== Table of Contents ----------------- * Model Details * How to Get Started With the Model * Uses * Risks, Limitations and Biases * Training * Evaluation * Environmental Impact * Citation Information Model Details ------------- * Model Description: This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the program synthesis task for the lisp inspired DSL code. * Developed by: Ahmed Elnaggar,Wei Ding * Model Type: Summarization * Language(s): English * License: Unknown * Resources for more information: + Research Paper + GitHub Repo How to Get Started With the Model --------------------------------- Here is how to use this model to generate lisp inspired DSL code using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Uses ---- #### Direct Use The model could be used to generate lisp inspired DSL code given the human language description tasks. Risks, Limitations and Biases ----------------------------- As detailed in this model’s publication, this model makes use of the data-set One Billion Word Language Model Benchmark corpus in order to gather the self-supervised English data samples. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). As such, it should be noted that language models that are pretrained from text corpus such as the One Billion Word Word Language Model Benchmark corpus have been further explored (e.g by Ngo, Helen & Araújo et al(2021) reports that the One Billion Word Word Language Model Benchmark corpus > > “generate text in the linguistic style of news, without any grounding in the real world. In addition to potential harms from models which are inadvertently optimized for generating fake news.” > > > The aforementioned publication continues to warn that the One Billion Word Word Language Model Benchmark corpus > > contains sentences which contain words commonly found on blocklists. While these sentences may have plausibly been used in expository contexts within the article, the destructive sentence-level preprocessing and shuffling applied to lm1b [One Billion Word Word Language Model Benchmark corpus] removes all long-range structure from the text and makes it infeasible to track the context and intent of individual examples. > > > Ngo, Helen & Araújo et al(2021) Training -------- #### Training Data The supervised training tasks datasets can be downloaded on Link The authors provide additionally notes about the vocabulary used, in the associated paper: > > We used the SentencePiece model (Kudo, 2018) to construct the vocabulary for this research, as well as to decode and encode the input/output. > > > Training procedure ------------------ #### Preprocessing ##### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. 
###### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data. Evaluation ---------- #### Results For the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : Environmental Impact -------------------- Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper. * Hardware Type: Nvidia RTX 8000 GPUs * Hours used: Unknown * Cloud Provider: GCC TPU v2-8 and v3-8. * Compute Region: Unknown * Carbon Emitted: Unknown
[ "#### Direct Use\n\n\nThe model could be used to generate lisp inspired DSL code given the human language description tasks.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nAs detailed in this model’s publication, this model makes use of the data-set One Billion Word Language Model Benchmark corpus in order to gather the self-supervised English data samples.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\nAs such, it should be noted that language models that are pretrained from text corpus such as the One Billion Word Word Language Model Benchmark corpus have been further explored (e.g by Ngo, Helen & Araújo et al(2021) reports that the One Billion Word Word Language Model Benchmark corpus\n\n\n\n> \n> “generate text in the linguistic style of news, without any grounding in the real world. In addition to potential harms from models which are inadvertently optimized for generating fake news.”\n> \n> \n> \n\n\nThe aforementioned publication continues to warn that the One Billion Word Word Language Model Benchmark corpus\n\n\n\n> \n> contains sentences which contain words commonly found on blocklists. While these sentences may have plausibly been used in expository contexts within the article, the destructive sentence-level preprocessing and shuffling applied to lm1b [One Billion Word Word Language Model Benchmark corpus] removes all long-range structure from the text and makes it infeasible to track the context and intent of individual examples.\n> \n> \n> \n\n\nNgo, Helen & Araújo et al(2021)\n\n\nTraining\n--------", "#### Training Data\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nThe authors provide additionally notes about the vocabulary used, in the associated paper:\n\n\n\n> \n> We used the SentencePiece model (Kudo, 2018) to construct the vocabulary for this research, as well as to decode and encode the input/output.\n> \n> \n> \n\n\nTraining procedure\n------------------", "#### Preprocessing", "##### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "###### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data.\n\n\nEvaluation\n----------", "#### Results\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n* Hardware Type: Nvidia RTX 8000 GPUs\n* Hours used: Unknown\n* Cloud Provider: GCC TPU v2-8 and v3-8.\n* Compute Region: Unknown\n* Carbon Emitted: Unknown" ]
[ "TAGS\n#transformers #pytorch #tf #jax #t5 #feature-extraction #summarization #arxiv-2104.02443 #arxiv-1910.09700 #arxiv-2105.09680 #endpoints_compatible #has_space #text-generation-inference #region-us \n", "#### Direct Use\n\n\nThe model could be used to generate lisp inspired DSL code given the human language description tasks.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nAs detailed in this model’s publication, this model makes use of the data-set One Billion Word Language Model Benchmark corpus in order to gather the self-supervised English data samples.\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).\nAs such, it should be noted that language models that are pretrained from text corpus such as the One Billion Word Word Language Model Benchmark corpus have been further explored (e.g by Ngo, Helen & Araújo et al(2021) reports that the One Billion Word Word Language Model Benchmark corpus\n\n\n\n> \n> “generate text in the linguistic style of news, without any grounding in the real world. In addition to potential harms from models which are inadvertently optimized for generating fake news.”\n> \n> \n> \n\n\nThe aforementioned publication continues to warn that the One Billion Word Word Language Model Benchmark corpus\n\n\n\n> \n> contains sentences which contain words commonly found on blocklists. While these sentences may have plausibly been used in expository contexts within the article, the destructive sentence-level preprocessing and shuffling applied to lm1b [One Billion Word Word Language Model Benchmark corpus] removes all long-range structure from the text and makes it infeasible to track the context and intent of individual examples.\n> \n> \n> \n\n\nNgo, Helen & Araújo et al(2021)\n\n\nTraining\n--------", "#### Training Data\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nThe authors provide additionally notes about the vocabulary used, in the associated paper:\n\n\n\n> \n> We used the SentencePiece model (Kudo, 2018) to construct the vocabulary for this research, as well as to decode and encode the input/output.\n> \n> \n> \n\n\nTraining procedure\n------------------", "#### Preprocessing", "##### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "###### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 5,000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing lisp inspired DSL data.\n\n\nEvaluation\n----------", "#### Results\n\n\nFor the code documentation tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n* Hardware Type: Nvidia RTX 8000 GPUs\n* Hours used: Unknown\n* Cloud Provider: GCC TPU v2-8 and v3-8.\n* Compute Region: Unknown\n* Carbon Emitted: Unknown" ]
summarization
transformers
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the source code summarization csharp dataset.

## Intended uses & limitations

The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp", skip_special_tokens=True),
    device=0
)

tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/csharp/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model      |     Python     |      SQL       |       C#       |
| --------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small    |      8.45      |     17.55      |     19.74      |
| CodeTrans-ST-Base     |      9.12      |     15.00      |     18.65      |
| CodeTrans-TF-Small    |     10.06      |     17.71      |     20.40      |
| CodeTrans-TF-Base     |     10.94      |     17.66      |     21.12      |
| CodeTrans-TF-Large    |     12.41      |     18.40      |     21.43      |
| CodeTrans-MT-Small    |     13.11      |     19.15      |     22.39      |
| CodeTrans-MT-Base     |   **13.37**    |     19.24      |     23.20      |
| CodeTrans-MT-Large    |     13.24      |     19.40      |   **23.57**    |
| CodeTrans-MT-TF-Small |     12.10      |     18.25      |     22.03      |
| CodeTrans-MT-TF-Base  |     10.64      |     16.91      |     21.40      |
| CodeTrans-MT-TF-Large |     12.14      |   **19.98**    |     21.10      |
| CODE-NN               |       --       |     18.40      |     20.50      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"}]}
SEBIS/code_trans_t5_small_source_code_summarization_csharp
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization csharp ==================================================== Pretrained model on programming language csharp using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization csharp dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets.

## Intended uses & limitations

The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask", skip_special_tokens=True),
    device=0
)

tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/csharp/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model      |     Python     |      SQL       |       C#       |
| --------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small    |      8.45      |     17.55      |     19.74      |
| CodeTrans-ST-Base     |      9.12      |     15.00      |     18.65      |
| CodeTrans-TF-Small    |     10.06      |     17.71      |     20.40      |
| CodeTrans-TF-Base     |     10.94      |     17.66      |     21.12      |
| CodeTrans-TF-Large    |     12.41      |     18.40      |     21.43      |
| CodeTrans-MT-Small    |     13.11      |     19.15      |     22.39      |
| CodeTrans-MT-Base     |   **13.37**    |     19.24      |     23.20      |
| CodeTrans-MT-Large    |     13.24      |     19.40      |   **23.57**    |
| CodeTrans-MT-TF-Small |     12.10      |     18.25      |     22.03      |
| CodeTrans-MT-TF-Base  |     10.64      |     16.91      |     21.40      |
| CodeTrans-MT-TF-Large |     12.14      |   **19.98**    |     21.10      |
| CODE-NN               |       --       |     18.40      |     20.50      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"}]}
SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization csharp ==================================================== Pretrained model on programming language csharp using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the csharp code snippets.

## Intended uses & limitations

The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/csharp/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model      |     Python     |      SQL       |       C#       |
| --------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small    |      8.45      |     17.55      |     19.74      |
| CodeTrans-ST-Base     |      9.12      |     15.00      |     18.65      |
| CodeTrans-TF-Small    |     10.06      |     17.71      |     20.40      |
| CodeTrans-TF-Base     |     10.94      |     17.66      |     21.12      |
| CodeTrans-TF-Large    |     12.41      |     18.40      |     21.43      |
| CodeTrans-MT-Small    |     13.11      |     19.15      |     22.39      |
| CodeTrans-MT-Base     |   **13.37**    |     19.24      |     23.20      |
| CodeTrans-MT-Large    |     13.24      |     19.40      |   **23.57**    |
| CodeTrans-MT-TF-Small |     12.10      |     18.25      |     22.03      |
| CodeTrans-MT-TF-Base  |     10.64      |     16.91      |     21.40      |
| CodeTrans-MT-TF-Large |     12.14      |   **19.98**    |     21.10      |
| CODE-NN               |       --       |     18.40      |     20.50      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
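The BLEU numbers in the table above are reported by the CodeTrans authors. To score your own generated descriptions against reference summaries, a small sketch using sacrebleu could look like the following; sacrebleu is an assumed tooling choice here, and the authors' exact evaluation script may differ:

```python
import sacrebleu

# One generated summary per reference description (toy strings, not model output).
hypotheses = ["parses a unix timestamp into a local DateTime"]
references = [["converts a unix timestamp to a local DateTime object"]]

# corpus_bleu expects a list of hypothesis strings and a list of
# reference streams (one inner list per reference set).
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))
```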
{"tags": ["summarization"], "widget": [{"text": "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"}]}
SEBIS/code_trans_t5_small_source_code_summarization_csharp_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization csharp ==================================================== Pretrained model on programming language csharp using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the csharp code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization csharp
Pretrained model on programming language csharp using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the csharp code snippets.

## Intended uses & limitations

The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/csharp/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Training procedure

### Transfer-learning Pretraining

The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing csharp code.

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model      |     Python     |      SQL       |       C#       |
| --------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small    |      8.45      |     17.55      |     19.74      |
| CodeTrans-ST-Base     |      9.12      |     15.00      |     18.65      |
| CodeTrans-TF-Small    |     10.06      |     17.71      |     20.40      |
| CodeTrans-TF-Base     |     10.94      |     17.66      |     21.12      |
| CodeTrans-TF-Large    |     12.41      |     18.40      |     21.43      |
| CodeTrans-MT-Small    |     13.11      |     19.15      |     22.39      |
| CodeTrans-MT-Base     |   **13.37**    |     19.24      |     23.20      |
| CodeTrans-MT-Large    |     13.24      |     19.40      |   **23.57**    |
| CodeTrans-MT-TF-Small |     12.10      |     18.25      |     22.03      |
| CodeTrans-MT-TF-Base  |     10.64      |     16.91      |     21.40      |
| CodeTrans-MT-TF-Large |     12.14      |   **19.98**    |     21.10      |
| CODE-NN               |       --       |     18.40      |     20.50      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "public static DateTime ParseUnixDateTime ( double unixTime ) { var dt = new DateTime ( CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , CODE_INTEGER , System . DateTimeKind . Utc ) ; dt = dt . AddSeconds ( unixTimeStamp ) . ToLocalTime ( ) ; return dt ; }"}]}
SEBIS/code_trans_t5_small_source_code_summarization_csharp_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization csharp ==================================================== Pretrained model on programming language csharp using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized csharp code functions: it works best with tokenized csharp functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the csharp code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the csharp function or be fine-tuned on other csharp code tasks. It can be used on unparsed and untokenized csharp code. However, if the csharp code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code. Evaluation results ------------------ For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate csharp function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 2000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing csharp code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization python
Pretrained model on programming language python using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on the source code summarization python dataset.

## Intended uses & limitations

The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better.

### How to use

Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python", skip_special_tokens=True),
    device=0
)

tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) '''
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/python/small_model.ipynb).

## Training data

The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)

## Evaluation results

For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score):

Test results :

| Language / Model      |     Python     |      SQL       |       C#       |
| --------------------- | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small    |      8.45      |     17.55      |     19.74      |
| CodeTrans-ST-Base     |      9.12      |     15.00      |     18.65      |
| CodeTrans-TF-Small    |     10.06      |     17.71      |     20.40      |
| CodeTrans-TF-Base     |     10.94      |     17.66      |     21.12      |
| CodeTrans-TF-Large    |     12.41      |     18.40      |     21.43      |
| CodeTrans-MT-Small    |     13.11      |     19.15      |     22.39      |
| CodeTrans-MT-Base     |   **13.37**    |     19.24      |     23.20      |
| CodeTrans-MT-Large    |     13.24      |     19.40      |   **23.57**    |
| CodeTrans-MT-TF-Small |     12.10      |     18.25      |     22.03      |
| CodeTrans-MT-TF-Base  |     10.64      |     16.91      |     21.40      |
| CodeTrans-MT-TF-Large |     12.14      |   **19.98**    |     21.10      |
| CODE-NN               |       --       |     18.40      |     20.50      |

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
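The card notes that the model performs better on tokenized input. One way to produce the space-separated token form shown in the widget example is Python's standard `tokenize` module; this preprocessing is an assumption for illustration, not necessarily the exact pipeline used to build the training data:

```python
import io
import tokenize

def space_tokenize(source: str) -> str:
    """Split Python source into whitespace-separated tokens."""
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        # Drop purely structural tokens so only code tokens remain.
        if tok.type in (tokenize.NEWLINE, tokenize.NL, tokenize.INDENT,
                        tokenize.DEDENT, tokenize.ENDMARKER):
            continue
        tokens.append(tok.string)
    return " ".join(tokens)

print(space_tokenize("def add(a, b):\n    return a + b\n"))
# -> "def add ( a , b ) : return a + b"
```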
{"tags": ["summarization"], "widget": [{"text": "'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == \" ; Include this text \" : line = line + \" Include below \" out_file . write ( line ) '"}]}
SEBIS/code_trans_t5_small_source_code_summarization_python
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization python ==================================================== Pretrained model on programming language python using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on the source code summarization python dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization python Pretrained model on programming language python using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask", skip_special_tokens=True), device=0 ) tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/python/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
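`AutoModelWithLMHead`, used in the snippet above, is deprecated in newer transformers releases. A sketch of the same pipeline built with `AutoModelForSeq2SeqLM` instead (assuming a reasonably recent transformers version; behaviour should be equivalent for this T5-based checkpoint):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, SummarizationPipeline

model_name = "SEBIS/code_trans_t5_small_source_code_summarization_python_multitask"
pipeline = SummarizationPipeline(
    model=AutoModelForSeq2SeqLM.from_pretrained(model_name),
    tokenizer=AutoTokenizer.from_pretrained(model_name, skip_special_tokens=True),
    device=-1,  # -1 = CPU; pass a GPU index such as 0 if one is available
)

tokenized_code = "def greet ( name ) : print ( 'hello ' + name )"
print(pipeline([tokenized_code]))
```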
{"tags": ["summarization"], "widget": [{"text": "'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == \" ; Include this text \" : line = line + \" Include below \" out_file . write ( line ) '"}]}
SEBIS/code_trans_t5_small_source_code_summarization_python_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization python ==================================================== Pretrained model on programming language python using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 300,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization python Pretrained model on programming language python using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/python/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 600 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code. 
## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
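The table above reports BLEU, but the exact scoring setup is not given in the card. A minimal sketch of scoring generated summaries against reference docstrings with sacrebleu (an assumption on my part; scores computed this way will not necessarily match the table):

```python
import sacrebleu  # pip install sacrebleu

# Hypothetical generated summaries and one reference stream of docstrings;
# the real evaluation data and BLEU settings are not reproduced here.
hypotheses = ["opens a file and writes the modified lines to the output file"]
references = [["open a file , append text to matching lines and write them out"]]

score = sacrebleu.corpus_bleu(hypotheses, references)
print(round(score.score, 2))
```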
{"tags": ["summarization"], "widget": [{"text": "'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == \" ; Include this text \" : line = line + \" Include below \" out_file . write ( line ) '"}]}
SEBIS/code_trans_t5_small_source_code_summarization_python_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization python ==================================================== Pretrained model on programming language python using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the python code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 600 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 600 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 600 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization python Pretrained model on programming language python using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized python code functions: it works best with tokenized python functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the python code snippets. ## Intended uses & limitations The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = '''with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == " ; Include this text " : line = line + " Include below " out_file . write ( line ) ''' pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/python/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code. 
## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
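Instead of the SummarizationPipeline shown above, the checkpoint can also be driven through `model.generate` directly, for example to batch several functions at once. The generation settings below (beam size, max length) are illustrative assumptions, not the values behind the reported scores:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

snippets = [
    "def add ( a , b ) : return a + b",
    "with open ( path ) as f : data = f . read ( )",
]
inputs = tokenizer(snippets, return_tensors="pt", padding=True, truncation=True, max_length=512)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_length=32, num_beams=4)
for ids in output_ids:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```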
{"tags": ["summarization"], "widget": [{"text": "'with open ( CODE_STRING , CODE_STRING ) as in_file : buf = in_file . readlines ( ) with open ( CODE_STRING , CODE_STRING ) as out_file : for line in buf : if line == \" ; Include this text \" : line = line + \" Include below \" out_file . write ( line ) '"}]}
SEBIS/code_trans_t5_small_source_code_summarization_python_transfer_learning_finetune
null
[ "transformers", "pytorch", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization python ==================================================== Pretrained model on programming language python using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized python code functions: it works best with tokenized python functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the python code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the python function or be fine-tuned on other python code tasks. It can be used on unparsed and untokenized python code. However, if the python code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate python function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing python code. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate python function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 5000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing python code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used single-task training on source code summarization sql dataset. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/source%20code%20summarization/sql/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
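As with the Python models, tokenized input is recommended; the widget example ("select time ( col0 ) from tab0") suggests a simple space-separated style. A rough regex-based sketch of that preprocessing (an assumption, not the documented pipeline):

```python
import re

def space_tokenize_sql(query: str) -> str:
    # Split identifiers/numbers from punctuation and lower-case keywords,
    # approximating the spacing of the widget example above.
    return " ".join(re.findall(r"\w+|[^\w\s]", query.lower()))

print(space_tokenize_sql("SELECT time(col0) FROM tab0"))
# select time ( col0 ) from tab0
```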
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_small_source_code_summarization_sql
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used single-task training on the source code summarization sql dataset. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/pre-training/source%20code%20summarization/sql/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 460,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
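The snippet above hard-codes device=0, which requires a GPU. A small variant that falls back to CPU when CUDA is not available (same checkpoint, otherwise unchanged):

```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

model_name = "SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask"
device = 0 if torch.cuda.is_available() else -1  # GPU index if present, else CPU

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained(model_name),
    tokenizer=AutoTokenizer.from_pretrained(model_name, skip_special_tokens=True),
    device=device,
)
print(pipeline(["select count ( * ) from tab0 where col1 = 'a'"]))
```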
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 460,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 460,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 460,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/source%20code%20summarization/sql/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code. ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_small_source_code_summarization_sql_multitask_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used multi-task training on 13 supervised tasks in the software development domain and 7 unsupervised datasets. It is then fine-tuned on the source code summarization task for the sql code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Multi-task Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Multi-task Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1200 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
summarization
transformers
# CodeTrans model for source code summarization sql Pretrained model on programming language sql using the t5 small model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized sql code functions: it works best with tokenized sql functions. ## Model description This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the sql code snippets. ## Intended uses & limitations The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: ```python from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline pipeline = SummarizationPipeline( model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune"), tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune", skip_special_tokens=True), device=0 ) tokenized_code = "select time ( col0 ) from tab0" pipeline([tokenized_code]) ``` Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/transfer%20learning%20fine-tuning/source%20code%20summarization/sql/small_model.ipynb). ## Training data The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1) ## Training procedure ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code. ## Evaluation results For the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score): Test results : | Language / Model | Python | SQL | C# | | -------------------- | :------------: | :------------: | :------------: | | CodeTrans-ST-Small | 8.45 | 17.55 | 19.74 | | CodeTrans-ST-Base | 9.12 | 15.00 | 18.65 | | CodeTrans-TF-Small | 10.06 | 17.71 | 20.40 | | CodeTrans-TF-Base | 10.94 | 17.66 | 21.12 | | CodeTrans-TF-Large | 12.41 | 18.40 | 21.43 | | CodeTrans-MT-Small | 13.11 | 19.15 | 22.39 | | CodeTrans-MT-Base | **13.37** | 19.24 | 23.20 | | CodeTrans-MT-Large | 13.24 | 19.40 | **23.57** | | CodeTrans-MT-TF-Small | 12.10 | 18.25 | 22.03 | | CodeTrans-MT-TF-Base | 10.64 | 16.91 | 21.40 | | CodeTrans-MT-TF-Large | 12.14 | **19.98** | 21.10 | | CODE-NN | -- | 18.40 | 20.50 | > Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
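Like other transformers summarization pipelines, each call should return one dict with a summary_text key per input, so several queries can be documented in one pass (the queries below are made-up examples):

```python
queries = [
    "select time ( col0 ) from tab0",
    "select count ( * ) from tab1 where col2 > 10",
]
results = pipeline(queries)  # `pipeline` as constructed in the snippet above
for query, result in zip(queries, results):
    print(query, "->", result["summary_text"])
```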
{"tags": ["summarization"], "widget": [{"text": "select time ( col0 ) from tab0"}]}
SEBIS/code_trans_t5_small_source_code_summarization_sql_transfer_learning_finetune
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us
CodeTrans model for source code summarization sql ================================================= Pretrained model on programming language sql using the t5 small model architecture. It was first released in this repository. This model is trained on tokenized sql code functions: it works best with tokenized sql functions. Model description ----------------- This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. It is then fine-tuned on the source code summarization task for the sql code snippets. Intended uses & limitations --------------------------- The model could be used to generate the description for the sql function or be fine-tuned on other sql code tasks. It can be used on unparsed and untokenized sql code. However, if the sql code is tokenized, the performance should be better. ### How to use Here is how to use this model to generate sql function documentation using Transformers SummarizationPipeline: Run this example in colab notebook. Training data ------------- The supervised training tasks datasets can be downloaded on Link Training procedure ------------------ ### Transfer-learning Pretraining The model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. ### Fine-tuning This model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing sql code. Evaluation results ------------------ For the source code summarization tasks, different models achieve the following results on different programming languages (in BLEU score): Test results : > > Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn > > >
[ "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #summarization #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to generate sql function documentation using Transformers SummarizationPipeline:\n\n\nRun this example in colab notebook.\n\n\nTraining data\n-------------\n\n\nThe supervised training tasks datasets can be downloaded on Link\n\n\nTraining procedure\n------------------", "### Transfer-learning Pretraining\n\n\nThe model was trained on a single TPU Pod V3-8 for 500,000 steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Fine-tuning\n\n\nThis model was then fine-tuned on a single TPU Pod V2-8 for 1000 steps in total, using sequence length 512 (batch size 256), using only the dataset only containing sql code.\n\n\nEvaluation results\n------------------\n\n\nFor the source code summarization tasks, different models achieves the following results on different programming languages (in BLEU score):\n\n\nTest results :\n\n\n\n\n> \n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn\n> \n> \n>" ]
feature-extraction
transformers
# CodeTrans transfer learning pre-trained model
Pretrained model on programming languages using the t5 small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans).

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain.

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

It could be used to fine-tune other tasks in the software development domain; a minimal fine-tuning sketch follows this card.

> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
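Since this checkpoint is only pre-trained, the card gives no usage example. The sketch below shows one way it might be loaded as a starting point for downstream fine-tuning, assuming the target task is framed as text-to-text; the example input/target pair and the choice of `T5ForConditionalGeneration` are illustrative assumptions, not part of the original card.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Minimal fine-tuning sketch (assumption: the downstream task is text-to-text,
# e.g. code in, short description out).
model_name = "SEBIS/code_trans_t5_small_transfer_learning_pretrain"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# One illustrative (input, target) pair; a real setup would iterate over a dataset.
inputs = tokenizer("def add ( a , b ) : return a + b", return_tensors="pt")
labels = tokenizer("add two numbers", return_tensors="pt").input_ids

# Forward pass returning the seq2seq loss that a fine-tuning loop
# (or the Trainer API) would minimize.
outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
print(float(outputs.loss))
```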
{}
SEBIS/code_trans_t5_small_transfer_learning_pretrain
null
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[]
TAGS #transformers #pytorch #jax #t5 #feature-extraction #endpoints_compatible #text-generation-inference #region-us
# CodeTrans transfer learning pre-trained model

Pretrained model on programming languages using the t5 small model architecture. It was first released in
this repository.

## Model description

This CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain.

The model was trained on a single TPU Pod V3-8 for half a million steps in total, using sequence length 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

It could be used to fine-tune other tasks in the software development domain.

> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn
[ "# CodeTrans transfer learning pre-trained model\nPretrained model on programming languages using the t5 small model architecture. It was first released in\nthis repository.", "## Model description\n\nThis CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. \n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. \n\nIt could be used to fine-tune other tasks in the software development domain.\n\n\n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #feature-extraction #endpoints_compatible #text-generation-inference #region-us \n", "# CodeTrans transfer learning pre-trained model\nPretrained model on programming languages using the t5 small model architecture. It was first released in\nthis repository.", "## Model description\n\nThis CodeTrans model is based on the 't5-small' model. It has its own SentencePiece vocabulary model. It used transfer-learning pre-training on 7 unsupervised datasets in the software development domain. \n\nThe model was trained on a single TPU Pod V3-8 for half million steps in total, using sequence length 512 (batch size 4096).\nIt has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.\nThe optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training. \n\nIt could be used to fine-tune other tasks in the software development domain.\n\n\n> Created by Ahmed Elnaggar | LinkedIn and Wei Ding | LinkedIn" ]
text2text-generation
transformers
# legal_t5_small_cls_cs model

Model for classification of legal text written in Czech. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model is trained on three parallel corpora from jrc-acquis.

## Model description

legal_t5_small_cls_cs is based on the `t5-small` model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

## Intended uses & limitations

The model could be used for classification of legal texts written in Czech.

### How to use

Here is how to use this model to classify legal text written in Czech in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_cls_cs"),
    tokenizer=AutoTokenizer.from_pretrained(pretrained_model_name_or_path = "SEBIS/legal_t5_small_cls_cs", do_lower_case=False, skip_special_tokens=True),
    device=0
)

cs_text = "Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV) (2006/C 103/16) (Text s významem pro EHP) Dne 29. března 2006 se Komise rozhodla nevznést námitky proti výše uvedenému spojení a prohlásit ho za slučitelné se společným trhem. Toto rozhodnutí je založeno na čl. 6 odst. 1 písm. b) nařízení Rady (ES) č. 139/2004. Celý text rozhodnutí je přístupný pouze v angličtině a bude uveřejněn poté, co bude zbaven obchodního tajemství, které může případně obsahovat. Text bude dosažitelný: - na webové stránce Europa – hospodářská soutěž (http://europa.eu.int/comm/competition/mergers/cases/). Tato webová stránka umožňuje vyhledat jednotlivá rozhodnutí o spojení, a to včetně společnosti, čísla případu, data a indexu odvětví hospodářství. - v elektronické podobě na webové stránce EUR-Lex, pod dokumentem č. 32006M4169. EUR-Lex umožňuje přístup k Evropskému právu přes Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"

pipeline([cs_text], max_length=512)
```
A `generate()`-based alternative is sketched after this card.

## Training data

The legal_t5_small_cls_cs model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of 18 thousand texts.

## Training procedure

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

## Evaluation results

When used on the classification test dataset, the model achieves the following results:

Test results :

| Model | F1 score |
|:-----:|:-----:|
| legal_t5_small_cls_cs | 0.6297|

### BibTeX entry and citation info

> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
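As with the pipeline example in the card above, the classifier can also be called through the tokenizer and `generate()` directly, which makes truncation of long legal texts explicit. This is a minimal sketch under that assumption; the shortened input string and the `max_length` values are illustrative placeholders, not settings from the original card.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

# Minimal sketch: classify a (shortened, illustrative) Czech legal text by
# generating the class label as text, instead of using TranslationPipeline.
model_name = "SEBIS/legal_t5_small_cls_cs"
tokenizer = AutoTokenizer.from_pretrained(model_name, do_lower_case=False)
model = AutoModelWithLMHead.from_pretrained(model_name)

cs_text = "Bez námitek k navrhovanému spojení (Případ č. COMP/M.4169 – Virgin/CPW/JV) ..."  # truncated placeholder for the full example above
inputs = tokenizer(cs_text, return_tensors="pt", truncation=True, max_length=512)
label_ids = model.generate(inputs["input_ids"], max_length=16)
print(tokenizer.decode(label_ids[0], skip_special_tokens=True))  # generated class label
```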
{"language": "Cszech", "tags": ["classification Cszech model"], "datasets": ["jrc-acquis"], "widget": [{"text": "Bez n\u00e1mitek k navrhovan\u00e9mu spojen\u00ed (P\u0159\u00edpad \u010d. COMP/M.4169 \u2013 Virgin/CPW/JV) (2006/C 103/16) (Text s v\u00fdznamem pro EHP) Dne 29. b\u0159ezna 2006 se Komise rozhodla nevzn\u00e9st n\u00e1mitky proti v\u00fd\u0161e uveden\u00e9mu spojen\u00ed a prohl\u00e1sit ho za slu\u010diteln\u00e9 se spole\u010dn\u00fdm trhem. Toto rozhodnut\u00ed je zalo\u017eeno na \u010dl. 6 odst. 1 p\u00edsm. b) na\u0159\u00edzen\u00ed Rady (ES) \u010d. 139/2004. Cel\u00fd text rozhodnut\u00ed je p\u0159\u00edstupn\u00fd pouze v angli\u010dtin\u011b a bude uve\u0159ejn\u011bn pot\u00e9, co bude zbaven obchodn\u00edho tajemstv\u00ed, kter\u00e9 m\u016f\u017ee p\u0159\u00edpadn\u011b obsahovat. Text bude dosa\u017eiteln\u00fd: - na webov\u00e9 str\u00e1nce Europa \u2013 hospod\u00e1\u0159sk\u00e1 sout\u011b\u017e (http://europa.eu.int/comm/competition/mergers/cases/). Tato webov\u00e1 str\u00e1nka umo\u017e\u0148uje vyhledat jednotliv\u00e1 rozhodnut\u00ed o spojen\u00ed, a to v\u010detn\u011b spole\u010dnosti, \u010d\u00edsla p\u0159\u00edpadu, data a indexu odv\u011btv\u00ed hospod\u00e1\u0159stv\u00ed. - v elektronick\u00e9 podob\u011b na webov\u00e9 str\u00e1nce EUR-Lex, pod dokumentem \u010d. 32006M4169. EUR-Lex umo\u017e\u0148uje p\u0159\u00edstup k Evropsk\u00e9mu pr\u00e1vu p\u0159es Internet. (http://europa.eu.int/eur-lex/lex) --------------------------------------------------"}]}
SEBIS/legal_t5_small_cls_cs
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "classification Cszech model", "dataset:jrc-acquis", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
[]
[ "Cszech" ]
TAGS #transformers #pytorch #jax #t5 #text2text-generation #classification Cszech model #dataset-jrc-acquis #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
legal\_t5\_small\_cls\_cs model
===============================

Model for classification of legal text written in Czech. It was first released in
this repository. This model is trained on three parallel corpora from jrc-acquis.

Model description
-----------------

legal\_t5\_small\_cls\_cs is based on the 't5-small' model and was trained on a large corpus of parallel text. This is a smaller model, which scales the baseline model of t5 down by using 'dmodel = 512', 'dff = 2,048', 8-headed attention, and only 6 layers each in the encoder and decoder. This variant has about 60 million parameters.

Intended uses & limitations
---------------------------

The model could be used for classification of legal texts written in Czech.

### How to use

Here is how to use this model to classify legal text written in Czech in PyTorch:

Training data
-------------

The legal\_t5\_small\_cls\_cs model was trained on the JRC-ACQUIS dataset, consisting of 18 thousand texts.

Training procedure
------------------

The model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 60M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.

### Preprocessing

A unigram model was trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.

### Pretraining

Evaluation results
------------------

When used on the classification test dataset, the model achieves the following results:

Test results :

### BibTeX entry and citation info

> Created by Ahmed Elnaggar/@Elnaggar\_AI | LinkedIn
[ "### How to use\n\n\nHere is how to use this model to classify legal text written in Cszech in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_cls\\_cs model was trained on JRC-ACQUIS dataset consisting of 18 Thousand texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for classification test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]
[ "TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #classification Cszech model #dataset-jrc-acquis #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### How to use\n\n\nHere is how to use this model to classify legal text written in Cszech in PyTorch:\n\n\nTraining data\n-------------\n\n\nThe legal\\_t5\\_small\\_cls\\_cs model was trained on JRC-ACQUIS dataset consisting of 18 Thousand texts.\n\n\nTraining procedure\n------------------\n\n\nThe model was trained on a single TPU Pod V3-8 for 250K steps in total, using sequence length 512 (batch size 64). It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture. The optimizer used is AdaFactor with inverse square root learning rate schedule for pre-training.", "### Preprocessing\n\n\nAn unigram model trained with 88M lines of text from the parallel corpus (of all possible language pairs) to get the vocabulary (with byte pair encoding), which is used with this model.", "### Pretraining\n\n\nEvaluation results\n------------------\n\n\nWhen the model is used for classification test dataset, achieves the following results:\n\n\nTest results :", "### BibTeX entry and citation info\n\n\n\n> \n> Created by Ahmed Elnaggar/@Elnaggar\\_AI | LinkedIn\n> \n> \n>" ]