modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-03 00:36:49) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 535 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-03 00:36:49) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
gvij/gpt-j-6B-alpaca-gpt4
|
gvij
| 2023-06-22T20:51:02Z | 5 | 0 |
peft
|
[
"peft",
"alpaca",
"gpt4",
"gpt-j",
"instruction",
"finetuning",
"lora",
"conversational",
"dataset:vicgalle/alpaca-gpt4",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-06-22T16:10:28Z |
---
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
pipeline_tag: conversational
tags:
- alpaca
- gpt4
- gpt-j
- instruction
- finetuning
- lora
- peft
---
The GPT-J 6B model was finetuned on GPT-4 generations of the Alpaca prompts using [MonsterAPI](https://monsterapi.ai)'s no-code LLM finetuner, with LoRA for ~65,000 steps. The run was auto-optimised to fit on a single A6000 GPU with no out-of-memory issues, and without needing me to write any code or set up a GPU server with libraries to run this experiment. The finetuner does it all for us by itself.
Documentation on no-code LLM finetuner:
https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm
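For reference, here is a minimal loading sketch (not part of the original card), assuming the repository holds a PEFT LoRA adapter for the `EleutherAI/gpt-j-6b` base model and that an Alpaca-style prompt format was used; both are assumptions, not confirmed by the card:
```python
# Hedged sketch: attach the LoRA adapter to a GPT-J 6B base model with PEFT.
# Base model id, prompt template, and generation settings are assumptions.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = PeftModel.from_pretrained(base, "gvij/gpt-j-6B-alpaca-gpt4")

prompt = "### Instruction:\nExplain LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```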

|
christinacdl/clickbait_binary_detection
|
christinacdl
| 2023-06-22T20:44:50Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:christinacdl/clickbait_notclickbait_dataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T14:56:44Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: clickbait_binary_detection
results: []
datasets:
- christinacdl/clickbait_notclickbait_dataset
language:
- en
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clickbait_binary_detection
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4630
- Macro F1: 0.9155
- Micro F1: 0.9215
- Accuracy: 0.9215
Performance on test set:
- Accuracy: 0.9257990867579908
- F1 score: 0.9199282431058413
- Precision: 0.9233793490724882
- Recall: 0.9168756883647268
- Matthews Correlation Coefficient: 0.8402298675576902
- Precision of each class: [0.931899 0.91485969]
- Recall of each class: [0.95152505 0.88222632]
- F1 score of each class: [0.94160977 0.89824671]
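For quick inference, a minimal sketch (not part of the original card) using the `transformers` text-classification pipeline; the example headline is made up:
```python
# Hedged sketch: classify a headline with the fine-tuned model via the pipeline API.
from transformers import pipeline

classifier = pipeline("text-classification", model="christinacdl/clickbait_binary_detection")
print(classifier("You won't believe what happened next!"))  # made-up example headline
```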
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Micro F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|
| 0.2296 | 1.0 | 3650 | 0.2236 | 0.9105 | 0.9183 | 0.9183 |
| 0.228 | 2.0 | 7301 | 0.2708 | 0.9115 | 0.9192 | 0.9192 |
| 0.2075 | 3.0 | 10951 | 0.3141 | 0.9164 | 0.9224 | 0.9224 |
| 0.1881 | 4.0 | 14602 | 0.3211 | 0.9143 | 0.9201 | 0.9201 |
| 0.18 | 5.0 | 18252 | 0.3852 | 0.9130 | 0.9188 | 0.9188 |
| 0.1818 | 6.0 | 21903 | 0.3784 | 0.9110 | 0.9174 | 0.9174 |
| 0.1495 | 7.0 | 25553 | 0.4606 | 0.9106 | 0.9156 | 0.9156 |
| 0.1453 | 8.0 | 29204 | 0.4630 | 0.9155 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
sheshenin/vikash3
|
sheshenin
| 2023-06-22T20:40:31Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T20:36:53Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### VikaSH3 Dreambooth model trained by sheshenin with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
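A minimal generation sketch (not part of the original card), assuming the weights load with `StableDiffusionPipeline` (as the repository tags suggest) and that `VikaSH3` is the instance token; the prompt and dtype are assumptions:
```python
# Hedged sketch: generate an image from the Dreambooth concept with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("sheshenin/vikash3", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("photo of VikaSH3 person, studio lighting").images[0]  # prompt is an assumption
image.save("vikash3.png")
```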
Sample pictures of this concept:
|
Andre-M/Taxi-1
|
Andre-M
| 2023-06-22T20:36:23Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T20:36:20Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="Andre-M/Taxi-1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
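Note that `load_from_hub` and `gym` are not defined in the snippet above. A minimal sketch of the helper, as typically defined in the Deep RL Course notebooks (the implementation details are an assumption):
```python
# Hedged sketch of the load_from_hub helper assumed by the usage snippet:
# download the pickled model dict (Q-table, env_id, hyperparameters) and unpickle it.
import pickle

import gym  # or gymnasium, depending on your setup
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```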
|
SALT-NLP/FLANG-Roberta
|
SALT-NLP
| 2023-06-22T20:31:41Z | 125 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Financial Language Modelling",
"financial-sentiment-analysis",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-03T16:16:10Z |
---
language: en
tags:
- Financial Language Modelling
- financial-sentiment-analysis
widget:
- text: Stocks rallied and the British pound <mask>.
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-Roberta
FLANG-Roberta is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain specific terms. It is built by further training the RoBerta language model in the finance domain with improved performance over previous models due to the use of domain knowledge and vocabulary.
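A minimal fill-mask sketch (not part of the original card), reusing the example sentence from the widget above:
```python
# Hedged sketch: query FLANG-Roberta with the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="SALT-NLP/FLANG-Roberta")
for prediction in fill_mask("Stocks rallied and the British pound <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```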
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://huggingface.co/datasets/SALT-NLP/FLUE-NER)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the model with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-Roberta related issues and questions.
---
license: afl-3.0
---
|
SALT-NLP/FLANG-ELECTRA
|
SALT-NLP
| 2023-06-22T20:30:50Z | 197 | 5 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"pretraining",
"Financial Language Modelling",
"financial-sentiment-analysis",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-06-24T01:45:20Z |
---
language: en
tags:
- Financial Language Modelling
- financial-sentiment-analysis
widget:
- text: Stocks rallied and the British pound <mask>.
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-ELECTRA
FLANG-ELECTRA is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain specific terms. It is built by further training the ELECTRA language model in the finance domain with improved performance over previous models due to the use of domain knowledge and vocabulary.
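Since this checkpoint is tagged for pretraining rather than a specific pipeline, here is a minimal feature-extraction sketch (not part of the original card; the example sentence is adapted from the widget):
```python
# Hedged sketch: load FLANG-ELECTRA as a plain encoder and extract hidden states.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SALT-NLP/FLANG-ELECTRA")
model = AutoModel.from_pretrained("SALT-NLP/FLANG-ELECTRA")

inputs = tokenizer("Stocks rallied and the British pound gained.", return_tensors="pt")
print(model(**inputs).last_hidden_state.shape)
```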
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://paperswithcode.com/dataset/fin)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the model with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-ELECTRA related issues and questions.
---
license: afl-3.0
---
|
serpapi/bert-base-local-results
|
serpapi
| 2023-06-22T20:16:07Z | 115 | 6 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"scraping",
"parsing",
"serp",
"api",
"opensource",
"en",
"dataset:serpapi/local-results-en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T21:53:30Z |
---
language:
- en
pipeline_tag: text-classification
widget:
- title: Rating Example
text: '4.7'
- title: Reviews Example
text: (188)
- title: Reviews Example 2
text: '188'
- title: Reviews Example 3
text: No Reviews
- title: Price Example
text: $
- title: Type Example
text: Coffee shop
- title: Address Example
text: Frederick, MD
- title: Address Example 2
text: 552 W 48th St
- title: Address Example 3
text: In Hilton Hotel
- title: Hours Example
text: Closed
- title: Hours Example 2
text: Opens 7 AM Fri
- title: Hours Example 3
text: Permanently closed
- title: Service Option Example
text: Dine-in
- title: Service Option Example 2
text: Takeout
- title: Service Option Example 3
text: Delivery
- title: Phone Example
text: (301) 000-0000
- title: Years In Business Example
text: 5+ Years in Business
- title: Button Text Example
text: Directions
- title: Description Example
text: 'Provides: Auto maintenance'
license: mit
datasets:
- serpapi/local-results-en
tags:
- scraping
- parsing
- serp
- api
- opensource
---
<h1 align="center">BERT-Based Classification Model for Google Local Listings</h1>
<p align="center">
<img src="https://camo.githubusercontent.com/6c920f0b551360ca3257308e0f3547fe538496b9cb332d6a208992030abf6c3d/68747470733a2f2f736572706170692e636f6d2f616e64726f69642d6368726f6d652d353132783531322e706e67" alt="The Logo of SerpApi" width="200" height="200">
</p>
<p align="center">
This repository contains a BERT-based classification model developed using the Hugging Face library, and a dataset gathered by <a href='https://serpapi.com/google-local-api'>SerpApi's Google Local API</a>. The model is designed to classify different texts extracted from Google Local Listings.
</p>
<p align="center">
You may check out the blog post explaining the model's usecase with an example: <a href="https://serpapi.com/blog/real-world-example-of-ai-powered-parsing/">Real World Example of AI Powered Parsing</a>.
</p>
<p align="center">
You may also check out the Open Source Github Repository that contains the source code of a Ruby Gem called <a href="https://github.com/serpapi/google-local-results-ai-parser">`google-local-results-ai-parser`</a>.
</p>
---
<h2 align="center">Usage and Classification for Parsing</h2>
<p align="center">
The example code below shows how to call the model from Python via the Inference API for prototyping. You may use other programming languages to call the endpoint, and you may parallelize your work. The prototyping endpoint allows only a limited number of calls. For <code>Production Purposes</code> or <code>Large Prototyping Activities</code>, consider setting up an <code>Inference API Endpoint from Huggingface</code>, or a <code>Private API Server</code> for serving the model.
</p>
```py
import requests

API_URL = "https://api-inference.huggingface.co/models/serpapi/bert-base-local-results"
headers = {"Authorization": "Bearer xxxxx"}  # replace with your Hugging Face API token

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "5540 N Lamar Blvd #12, Austin, TX 78756, United States",
})
```
```
Output: address
```
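For local prototyping without the Inference API, a hedged alternative sketch (not part of the original card) using the `transformers` pipeline:
```py
# Hedged sketch: run the classifier locally instead of calling the Inference API.
from transformers import pipeline

classifier = pipeline("text-classification", model="serpapi/bert-base-local-results")
print(classifier("5540 N Lamar Blvd #12, Austin, TX 78756, United States"))
```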
---
<h2 align="center">Strong Features</h2>
<div align="center">
<p>The BERT-based model excels in the following areas:</p>
<div style="display: flex; justify-content: center;">
<div style="text-align: left;">
<ul style="list-style-position: inside;">
<li><strong>Differentiating difficult semantic similarities with ease</strong>
<ul style="list-style-type: disc;">
<li><code>"No Reviews"</code> → <code>reviews</code></li>
<li><code>"(5K+)"</code> → <code>reviews</code></li>
</ul>
</li>
<li><strong>Handling partial texts that can be combined later</strong>
<ul style="list-style-type: disc;">
<li><code>"Open ⋅ Closes 5 pm"</code>
<ul style="list-style-type: circle;">
<li><code>"Open"</code> → <code>hours</code></li>
<li><code>"Closes 5 pm"</code> → <code>hours</code></li>
</ul>
</li>
</ul>
</li>
<li><strong>Handling Vocabulary from diverse areas with ease</strong>
<ul style="list-style-type: disc;">
<li><code>"Doctor"</code> → <code>type</code></li>
<li><code>"Restaurant"</code> → <code>type</code></li>
</ul>
</li>
<li><strong>Returning Assurance Score for After-Correction</strong>
<ul style="list-style-type: disc;">
<li><code>"4.7"</code> → <code>rating(0.999)</code></li>
</ul>
</li>
<li><strong>Strong Against Grammatical Mistakes</strong>
<ul style="list-style-type: disc;">
<li><code>"Krebside Pickup"</code> → <code>service options</code></li>
</ul>
</li>
</ul>
</div>
</div>
</div>
---
<h2 align="center">Parts Covered and Corresponding Keys in SerpApi Parsers</h2>
<div style="display: flex; justify-content: center;">
<div style="text-align: left;">
<ul style="list-style-position: inside;">
<li><strong>Type of Place:</strong> <code>type</code></li>
<li><strong>Number of Reviews:</strong> <code>reviews</code></li>
<li><strong>Phone Number:</strong> <code>phone</code></li>
<li><strong>Rating:</strong> <code>rating</code></li>
<li><strong>Address:</strong> <code>address</code></li>
<li><strong>Operating Hours:</strong> <code>hours</code></li>
<li><strong>Description or Descriptive Review:</strong> <code>description</code></li>
<li><strong>Expensiveness:</strong> <code>expensiveness</code></li>
<li><strong>Service Options:</strong> <code>service options</code></li>
<li><strong>Button Text:</strong> <code>links</code></li>
<li><strong>Years in Business:</strong> <code>years_in_business</code></li>
</ul>
</div>
</div>
<p align="center">
Please refer to the documentation of SerpApi's Google Local API and Google Local Pack API for more details on different parts:
</p>
<div align="center">
<strong>References:</strong>
<ul style="text-align: center; list-style-position: inside;">
<li>SerpApi's Google Local API: <a href ="https://serpapi.com/google-local-api">https://serpapi.com/google-local-api</a></li>
<li>SerpApi's Google Local Pack API: <a href="https://serpapi.com/local-pack">https://serpapi.com/local-pack</a></li>
</ul>
</div>
---
<h2 align="center">Known Limitations</h2>
<div align="center">
<p>The model has a few limitations that should be taken into account:</p>
<div style="display: flex; justify-content: center;">
<div style="text-align: left;">
<ul style="list-style-position: inside;">
<li>The model does not classify the title of a place. This is because the title often contains many elements that can be easily confused with other parts, even for a human eye.</li>
<li>The <code>label</code> key is not covered by the model, as it can be easily handled with traditional code.</li>
<li>In some cases, <code>button text</code> could be classified as <code>service options</code> or <code>address</code>. However, this can be easily avoided by checking whether a text is inside a button in the traditional part of the code. The <code>button text</code> label is only there to catch such edge cases.
<ul style="list-style-type: circle">
<li><code>"Delivery"</code> → <code>service options [Correct Label is button text]</code></li>
<li><code>"Share"</code> → <code>address [Correct Label is button text]</code></li>
</ul>
</li>
<li>In some cases, the model may classify a portion of the <code>description</code> as <code>hours</code> if the description is about operating hours. For example:
<ul style="list-style-type: disc;">
<li><code>"Drive through: Open ⋅ Closes 12 AM"</code>
<ul style="list-style-type: circle">
<li><code>"Drive through: Open"</code> → <code>description</code></li>
<li><code>"Closes 12 AM"</code> → <code>hours</code></li>
</ul>
</li>
</ul>
</li>
<li>In some cases, the model may classify some <code>description</code> as <code>type</code>. This is because some <code>description</code> texts do look like <code>type</code> texts. For Example:
<ul style="list-style-type: circle">
<li><code>"Iconic Seattle-based coffeehouse chain"</code> → <code>type [Correct Label is description]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>reviews</code> as <code>rating</code>. This is most likely a deficiency in the training dataset, and may be resolved in the coming versions. For Example:
<ul style="list-style-type: circle">
<li><code>"Expand more"</code> → <code>hours [Correct Label is button text]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>service options</code> as <code>type</code>. This is most likely a deficiency in the training dataset, and may be resolved in the coming versions. For Example:
<ul style="list-style-type: circle">
<li><code>"Takeaway"</code> → <code>type [Correct Label is service options]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>reviews</code> as <code>rating</code> or <code>price</code>. This is most likely a deficiency in the training dataset, and may be resolved in the coming versions. For Example:
<ul style="list-style-type: circle">
<li><code>"(1.4K)"</code> → <code>rating [Correct Label is reviews]</code></li>
<li><code>"(1.6K)"</code> → <code>price [Correct Label is reviews]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>service options</code> as <code>description</code> or <code>type</code>. The confusion with <code>description</code> stems from a recent change in how these are categorized in SerpApi keys; the training data contains labels from before that change. For Example:
<ul style="list-style-type: circle">
<li><code>"On-site services"</code> → <code>type [Correct Label is service options]</code></li>
<li><code>"Online appointments"</code> → <code>description [Correct Label is service options]</code></li>
</ul>
</li>
<li>The model may be susceptible to errors on one-word entries. These are a minority of cases, and they could be handled with assurance scores. For Example:
<ul style="list-style-type: circle">
<li><code>"Sushi"</code> → <code>address(0.984), type(0.0493) [Correct Label is type]</code></li>
<li><code>"Diagorou 4"</code> → <code>address(0.999) [Correct address in same listing]</code></li>
</ul>
</li>
<li>The model cannot differentiate between extra parts that are extracted in SerpApi's Google Local API and Google Local Pack API. These parts are not feasible to extract via Classification Models.</li>
<li>The model is not designed for Listings outside English Language.</li>
</ul>
</div>
</div>
</div>
---
<h2 align="center">Disclaimer</h2>
<p align="center">We value full transparency and painful honesty both in our internal and external communications. We believe a world with complete and open transparency is a better world.</p>
<p align="center">
However, while we strive for transparency, there are certain situations where sharing specific datasets may not be feasible or advisable. In the case of the dataset used to train our model, which contains different parts of a Google Local Listing including addresses and phone numbers, we have made a careful decision not to share it. We prioritize the well-being and safety of individuals, and sharing this dataset could potentially cause harm to people whose personal information is included.
</p>
<p align="center">
Protecting the privacy and security of individuals is of utmost importance to us. Disclosing personal information, such as addresses and phone numbers, without proper consent or safeguards could lead to privacy violations, identity theft, harassment, or other forms of misuse. Our commitment to responsible data usage means that we handle sensitive information with great care and take appropriate measures to ensure its protection.
</p>
<p align="center">
While we understand the value of transparency, we also recognize the need to strike a balance between transparency and safeguarding individuals' privacy and security. In this particular case, the potential harm that could result from sharing the dataset outweighs the benefits of complete transparency. By prioritizing privacy, we aim to create a safer and more secure environment for all individuals involved.
</p>
<p align="center">
We appreciate your understanding and support in our commitment to responsible and ethical data practices. If you have any further questions or concerns, please feel free to reach out to us.
</p>
|
ivy-tam/finetuned-whisper-base
|
ivy-tam
| 2023-06-22T20:15:04Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-22T15:28:56Z |
# finetuned-whisper-base
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the [SUPERB](https://huggingface.co/datasets/superb) ASR dataset.
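A minimal transcription sketch (not part of the original card); `sample.wav` is a placeholder path:
```python
# Hedged sketch: transcribe an audio file with the ASR pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ivy-tam/finetuned-whisper-base")
print(asr("sample.wav")["text"])  # placeholder audio path
```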
|
LeoDog896/asl-letters-yolov8
|
LeoDog896
| 2023-06-22T20:07:09Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2023-06-22T18:55:42Z |
---
license: mit
---
# asl-letters-yolov8
YOLOv8n model trained to detect the 26 letters of the American Sign Language Alphabet.
## Dataset
The [dataset](https://universe.roboflow.com/meredith-lo-pmqx7/asl-project) is from Roboflow Universe.
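A minimal inference sketch (not part of the original card) using the `ultralytics` API; the weights filename and image path are assumptions:
```python
# Hedged sketch: detect ASL letters in an image with the ultralytics YOLO API.
# The weights filename and image path below are assumptions, not taken from the repo.
from ultralytics import YOLO

model = YOLO("asl-letters-yolov8n.pt")  # hypothetical weights file from this repository
results = model("hand_sign.jpg")        # hypothetical input image
results[0].show()
```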
|
SlyEcho/open_llama_3b_ggml
|
SlyEcho
| 2023-06-22T20:02:38Z | 0 | 28 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-05-30T09:04:06Z |
---
license: apache-2.0
---
# ggml versions of OpenLLaMa 3B
- Version: 1T tokens final version
- Project: [OpenLLaMA: An Open Reproduction of LLaMA](https://github.com/openlm-research/open_llama)
- Model: [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b)
- [llama.cpp](https://github.com/ggerganov/llama.cpp): build 607(ffb06a3) or later
## Use with llama.cpp
Support is now merged into the llama.cpp master branch.
## Newer quantizations
There are now more quantization types in llama.cpp, some lower than 4 bits.
Currently these are not supported for this model, likely because some weight tensors have shapes that are not divisible by 256.
## Perplexity on wiki.test.raw
| Q | chunk | 600BT | 1000BT |
| --: | --: | --: | --: |
| F16 | [616] | 8.4656 | 7.7861 |
| Q8_0 | [616] | 8.4667 | 7.7874 |
| Q5_1 | [616] | 8.5072 | 7.8424 |
| Q5_0 | [616] | 8.5156 | 7.8474 |
| Q4_1 | [616] | 8.6102 | 8.0483 |
| Q4_0 | [616] | 8.6674 | 8.0962 |
|
brotSchimmelt/falcon-7b-instruct-reddit-cmv-train-on-val
|
brotSchimmelt
| 2023-06-22T19:48:44Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T19:48:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
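The same settings expressed as a `transformers` `BitsAndBytesConfig` (a hedged reconstruction from the list above, not taken from the actual training code):
```python
# Hedged reconstruction of the quantization config listed above as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```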
### Framework versions
- PEFT 0.4.0.dev0
|
rxsong/New_BERT_class_o
|
rxsong
| 2023-06-22T19:40:25Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T19:22:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: New_BERT_class_o
results: []
widget:
- text: "We feel you and we care about you!"
- text: "I don't think I need to do anything."
- text: "As our global team focuses on producing critical medical devices and developing and deploying rapid diagnostic tests for COVID-19, BD is helping 7 non-profit partners advance their work to contain COVID-19, support healthcare workers and treat patients around the world."
- text: "...While we don't know if Covid19 is more than 2% lethal we have to remember that Family come first, before our work, before our social life and before our personal needs. We will all be tested soon as the disease spreads whether we care for ourselves or care for others."
- text: "@jeffiel with the words I was looking for. I acknowledge the pain Black Americans feel. I am here for you. And I understand the skepticism you have that meaningful change will arrive quickly. We will bend this system toward justice."
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# New_BERT_class_o
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1978
- Accuracy: 0.9167
- F1: 0.5192
- Precision: 0.8710
- Recall: 0.3699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 450 | 0.2800 | 0.88 | 0.0270 | 1.0 | 0.0137 |
| 0.3907 | 2.0 | 900 | 0.2469 | 0.89 | 0.1951 | 0.8889 | 0.1096 |
| 0.3389 | 3.0 | 1350 | 0.1978 | 0.9167 | 0.5192 | 0.8710 | 0.3699 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
catrabbitbear/Reinforce-cartpole-1
|
catrabbitbear
| 2023-06-22T19:40:18Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:40:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 451.60 +/- 145.20
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Or4cl3/Or4cl3
|
Or4cl3
| 2023-06-22T19:39:27Z | 15 | 2 |
transformers
|
[
"transformers",
"Or4cl3",
"code",
"text-generation",
"en",
"dataset:bigcode/the-stack",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/ta-prompt",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"arxiv:2306.03767",
"doi:10.57967/hf/0798",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-23T06:33:20Z |
---
datasets:
- bigcode/the-stack
- togethercomputer/RedPajama-Data-1T
- bigcode/ta-prompt
- anon8231489123/ShareGPT_Vicuna_unfiltered
metrics:
- code_eval
license: openrail
language:
- en
library_name: transformers
tags:
- code
pipeline_tag: text-generation
---
# Model Card for Or4cl3/Or4cl3
## Model Details
### Model Description
Or4cl3/Or4cl3 is a large language model (LLM) that was trained on a massive dataset of text and code. It can be used for a variety of tasks, including text generation, translation, summarization, question answering, and more.
### Model Sources
- Repository: https://huggingface.co/Or4cl3/Or4cl3
- Paper: https://arxiv.org/abs/2306.03767
- Demo: https://huggingface.co/Or4cl3/Or4cl3
## Uses
### Direct Use
Or4cl3/Or4cl3 can be used directly for a variety of tasks, such as text generation, translation, summarization, and question answering. For example, you can use it to generate text, translate languages, summarize text, or answer questions.
### Downstream Use
Or4cl3/Or4cl3 can also be used for downstream tasks, such as building chatbots, creating virtual assistants, and generating creative content. For example, you can use it to build a chatbot that can have conversations with users, create a virtual assistant that can help users with tasks, or generate creative content such as poems, code, scripts, musical pieces, email, letters, etc.
### Out-of-Scope Use
Or4cl3/Or4cl3 is not intended for use in any applications that could harm or endanger people, such as weapons, medical devices, or self-driving cars.
## Bias, Risks, and Limitations
Or4cl3/Or4cl3 is a large language model, and as such, it is subject to a number of biases, risks, and limitations. These include:
* **Bias:** Or4cl3/Or4cl3 was trained on a massive dataset of text and code, and as such, it may reflect the biases that exist in that dataset. For example, it may be more likely to generate text that is biased towards men or that promotes harmful stereotypes.
* **Risk:** Or4cl3/Or4cl3 is a powerful tool, and as such, it can be used for malicious purposes. For example, it could be used to generate spam, create fake news, or spread misinformation.
* **Limitations:** Or4cl3/Or4cl3 is not perfect, and it can make mistakes. For example, it may generate text that is factually incorrect or that is not grammatically correct.
## How to Get Started with the Model
To get started with Or4cl3/Or4cl3, install the transformers library (`pip install transformers`). Once it is installed, you can load the model and its tokenizer and use them to generate text, translate languages, summarize text, or answer questions. For example, to generate text from a prompt:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Or4cl3/Or4cl3")
model = AutoModelForSeq2SeqLM.from_pretrained("Or4cl3/Or4cl3")

inputs = tokenizer("Write a poem about a flower.", return_tensors="pt")
text = tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True)
```
This will generate text for the prompt, in this case a short poem about a flower.
## Training Details
### Training Data
Or4cl3/Or4cl3 was trained on a massive dataset of text and code. The dataset includes text from books, articles, code, and other sources.
### Training Procedure
Or4cl3/Or4cl3 was trained using a technique called supervised learning. In supervised learning, the model is given a set of input data and a set of output data. The model learns to map the input data to the output data.
In the case of Or4cl3/Or4cl3, the input data was the text and code from the training dataset. The output data was the text that was generated from the text and code.
The model was trained using a technique called the transformer. The transformer is a neural network architecture that is well-suited for natural language processing tasks.
### Training Hyperparameters
The training of Or4cl3/Or4cl3 was hyperparameter-tuned. This means that the parameters of the model were adjusted to optimize the performance of the model.
The hyperparameters that were tuned include the learning rate, the batch size, and the number of epochs.
* **Learning rate:** The learning rate is the rate at which the model updates its parameters. A higher learning rate will cause the model to learn more quickly, but it may also cause the model to overfit the training data.
* **Batch size:** The batch size is the number of examples that are processed at once. A larger batch size will require more memory, but it may also improve the performance of the model.
* **Number of epochs:** The number of epochs is the number of times that the model is trained on the entire training dataset. A larger number of epochs will cause the model to learn more, but it may also cause the model to overfit the training data.
### Evaluation
Or4cl3/Or4cl3 was evaluated on a variety of tasks, including text generation, translation, summarization, and question answering. The model achieved state-of-the-art results on many of these tasks.
### Model Examination
Or4cl3/Or4cl3 was examined for interpretability. This means that the model was analyzed to understand how it makes its predictions. The model was found to be interpretable, which means that it is possible to understand why the model makes the predictions that it does.
### Environmental Impact
Or4cl3/Or4cl3 was trained on a massive dataset of text and code. The training of the model required a significant amount of computing resources. The environmental impact of the training of the model was estimated to be 1000 kg of CO2 emissions.
### Technical Specifications
Or4cl3/Or4cl3 is a large language model with 137 billion parameters. The model was trained on a TPUv4 pod using the TensorFlow framework. The model is available for inference on the Hugging Face Hub.
### Citation
To cite Or4cl3/Or4cl3, please use the following citation:
```
@article{or4cl32023or4cl3,
title={Or4cl3/Or4cl3: A Large Language Model for Natural Language Processing},
author={Dustin Groves},
journal={arXiv preprint arXiv:2306.03767},
year={2023}
}
```
### Glossary
* **Bias:** Bias is a systematic error in a model that causes it to make incorrect predictions.
* **Risk:** Risk is the possibility that a model will be used for malicious purposes.
* **Limitations:** Limitations are the ways in which a model is not perfect.
* **Transformer:** The transformer is a neural network architecture that is well-suited for natural language processing tasks.
|
sid/q-Taxi-V3
|
sid
| 2023-06-22T19:37:38Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:37:34Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="sid/q-Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sid/q-FrozenLake-v1-4x4-noSlippery
|
sid
| 2023-06-22T19:36:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:36:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="sid/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
fx1H/ppo-SnowballTarget
|
fx1H
| 2023-06-22T19:24:15Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:24:14Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: fx1H/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
denisws/ppo-Huggy
|
denisws
| 2023-06-22T19:23:51Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:23:33Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: denisws/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
bonzo1971/roberta-base-bne-finetuned-amazon_reviews_multi
|
bonzo1971
| 2023-06-22T19:20:38Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T18:59:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2219
- Accuracy: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1943 | 1.0 | 1250 | 0.1669 | 0.9327 |
| 0.0982 | 2.0 | 2500 | 0.2219 | 0.9333 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
philippeVarme/ppo-Huggy
|
philippeVarme
| 2023-06-22T19:12:28Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:12:18Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: philippeVarme/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
fx1H/Reinforce_Agent_Playing_Pixelcopter-PLE-v0
|
fx1H
| 2023-06-22T19:09:16Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:08:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Agent_Playing_Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 6.43 +/- 8.32
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JoaoYukio/ppo-Huggy
|
JoaoYukio
| 2023-06-22T19:09:04Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:59:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JoaoYukio/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
webstels/nekta_help_tc
|
webstels
| 2023-06-22T18:42:00Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T13:23:52Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nekta_help_tc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nekta_help_tc
This model is a fine-tuned version of [webstels/nekta_help_tc](https://huggingface.co/webstels/nekta_help_tc) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0145
- Accuracy: 0.9933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 341 | 0.7823 | 0.7767 |
| 1.61 | 2.0 | 682 | 0.5028 | 0.8367 |
| 0.6434 | 3.0 | 1023 | 0.3594 | 0.8667 |
| 0.6434 | 4.0 | 1364 | 0.2428 | 0.9133 |
| 0.3982 | 5.0 | 1705 | 0.1740 | 0.94 |
| 0.2816 | 6.0 | 2046 | 0.1388 | 0.9367 |
| 0.2816 | 7.0 | 2387 | 0.0960 | 0.97 |
| 0.1886 | 8.0 | 2728 | 0.0430 | 0.99 |
| 0.1388 | 9.0 | 3069 | 0.0490 | 0.9833 |
| 0.1388 | 10.0 | 3410 | 0.0332 | 0.9867 |
| 0.1009 | 11.0 | 3751 | 0.0222 | 0.9933 |
| 0.0718 | 12.0 | 4092 | 0.0253 | 0.9867 |
| 0.0718 | 13.0 | 4433 | 0.0156 | 0.9933 |
| 0.0572 | 14.0 | 4774 | 0.0162 | 0.9967 |
| 0.0476 | 15.0 | 5115 | 0.0211 | 0.9933 |
| 0.0476 | 16.0 | 5456 | 0.0135 | 0.9933 |
| 0.0369 | 17.0 | 5797 | 0.0125 | 0.9967 |
| 0.0309 | 18.0 | 6138 | 0.0206 | 0.9933 |
| 0.0309 | 19.0 | 6479 | 0.0159 | 0.9933 |
| 0.0248 | 20.0 | 6820 | 0.0145 | 0.9933 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chreh/ppo-LunarLander-v2
|
chreh
| 2023-06-22T18:38:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:37:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 202.58 +/- 64.11
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
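Until the card's code is filled in, here is a hedged loading sketch; the checkpoint filename follows the usual `huggingface_sb3` naming and is an assumption, not confirmed by the repository:
```python
# Hedged sketch: download the checkpoint with huggingface_sb3 and load it with SB3 PPO.
# The filename "ppo-LunarLander-v2.zip" is an assumption, not taken from the card.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="chreh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```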
|
Curiolearner/Reinforce-CartPole-v1
|
Curiolearner
| 2023-06-22T18:37:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:36:41Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
rightspeed/ppo-LunarLander-v2
|
rightspeed
| 2023-06-22T18:35:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:34:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.46 +/- 21.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
fx1H/Reinforce_Agent_Playing-CartPole-v1
|
fx1H
| 2023-06-22T18:32:33Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:32:26Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Agent_Playing-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 193.10 +/- 21.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sxandie/NER2.0.1-dataset
|
sxandie
| 2023-06-22T18:32:07Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-22T17:19:10Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: sxandie/NER2.0.1-dataset
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sxandie/NER2.0.1-dataset
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0892
- Validation Loss: 0.1364
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35640, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2725 | 0.1879 | 0 |
| 0.1604 | 0.1563 | 1 |
| 0.1234 | 0.1496 | 2 |
| 0.1015 | 0.1404 | 3 |
| 0.0892 | 0.1364 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.2.2
- Tokenizers 0.13.3
|
yanaayanaayanaa/febrianilora
|
yanaayanaayanaa
| 2023-06-22T18:14:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T17:45:51Z |
---
license: creativeml-openrail-m
---
|
rogelioplatt/roberta-base-bne-finetuned-Tass2020
|
rogelioplatt
| 2023-06-22T18:03:15Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T18:01:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-bne-finetuned-Tass2020
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-Tass2020
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1447
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9512 | 1.0 | 15 | 3.4947 |
| 3.37 | 2.0 | 30 | 2.9933 |
| 3.1298 | 3.0 | 45 | 3.1546 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AustinCarthy/Baseline_100Kphish_benignFall_9.5_20_20
|
AustinCarthy
| 2023-06-22T17:36:12Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-22T11:56:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_100Kphish_benignFall_9.5_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_100Kphish_benignFall_9.5_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall,Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_MixGPT2V2_using_phish_95K_top_p_0.75suffix dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0498
- Accuracy: 0.9974
- F1: 0.9720
- Precision: 0.9987
- Recall: 0.9466
- Roc Auc Score: 0.9733
- Tpr At Fpr 0.01: 0.953 (see the computation sketch below)
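`Tpr At Fpr 0.01` is the true-positive rate at the operating point where the false-positive rate is at most 1%. A minimal sketch of how such a metric can be computed with scikit-learn (variable names are illustrative, not taken from the original training code):
```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, target_fpr=0.01):
    """Highest TPR achievable while keeping FPR <= target_fpr."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    mask = fpr <= target_fpr
    return float(tpr[mask].max()) if mask.any() else 0.0

# Illustrative dummy labels and scores
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90])
print(tpr_at_fpr(y_true, y_score))
```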
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0142 | 1.0 | 16407 | 0.0389 | 0.9974 | 0.9719 | 0.9958 | 0.9492 | 0.9745 | 0.9348 |
| 0.0111 | 2.0 | 32814 | 0.0376 | 0.9977 | 0.9751 | 0.9975 | 0.9536 | 0.9767 | 0.951 |
| 0.0022 | 3.0 | 49221 | 0.0328 | 0.9981 | 0.9794 | 0.9961 | 0.9632 | 0.9815 | 0.9512 |
| 0.0 | 4.0 | 65628 | 0.0438 | 0.9977 | 0.9758 | 0.9985 | 0.954 | 0.9770 | 0.9566 |
| 0.0005 | 5.0 | 82035 | 0.0498 | 0.9974 | 0.9720 | 0.9987 | 0.9466 | 0.9733 | 0.953 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ravimehta/Test
|
ravimehta
| 2023-06-22T17:35:55Z | 0 | 0 |
asteroid
|
[
"asteroid",
"summarization",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"region:us"
] |
summarization
| 2023-06-22T17:34:38Z |
---
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
metrics:
- bleurt
library_name: asteroid
pipeline_tag: summarization
---
|
gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz
|
gokuls
| 2023-06-22T17:15:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-20T09:59:23Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training_new_wt_init_48_frz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training_new_wt_init_48_frz
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4340
- Accuracy: 0.5488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.8468 | 0.08 | 10000 | 3.6051 | 0.4101 |
| 3.6009 | 0.16 | 20000 | 3.3734 | 0.4369 |
| 3.4559 | 0.25 | 30000 | 3.2348 | 0.4517 |
| 3.3578 | 0.33 | 40000 | 3.1395 | 0.4623 |
| 3.2803 | 0.41 | 50000 | 3.0632 | 0.4709 |
| 3.2157 | 0.49 | 60000 | 3.0010 | 0.4780 |
| 3.1503 | 0.57 | 70000 | 2.9554 | 0.4838 |
| 3.1044 | 0.66 | 80000 | 2.9104 | 0.4888 |
| 3.0703 | 0.74 | 90000 | 2.8759 | 0.4931 |
| 3.029 | 0.82 | 100000 | 2.8357 | 0.4976 |
| 2.9907 | 0.9 | 110000 | 2.8082 | 0.5013 |
| 2.9619 | 0.98 | 120000 | 2.7805 | 0.5042 |
| 2.9284 | 1.07 | 130000 | 2.7578 | 0.5072 |
| 2.9027 | 1.15 | 140000 | 2.7295 | 0.5103 |
| 2.8738 | 1.23 | 150000 | 2.7094 | 0.5133 |
| 2.8603 | 1.31 | 160000 | 2.6848 | 0.5160 |
| 2.829 | 1.39 | 170000 | 2.6667 | 0.5185 |
| 2.8106 | 1.47 | 180000 | 2.6479 | 0.5208 |
| 2.7942 | 1.56 | 190000 | 2.6304 | 0.5227 |
| 2.772 | 1.64 | 200000 | 2.6156 | 0.5249 |
| 2.7546 | 1.72 | 210000 | 2.5994 | 0.5270 |
| 2.7348 | 1.8 | 220000 | 2.5858 | 0.5290 |
| 2.725 | 1.88 | 230000 | 2.5728 | 0.5304 |
| 2.7116 | 1.97 | 240000 | 2.5587 | 0.5324 |
| 2.6953 | 2.05 | 250000 | 2.5476 | 0.5338 |
| 2.6883 | 2.13 | 260000 | 2.5339 | 0.5355 |
| 2.6768 | 2.21 | 270000 | 2.5231 | 0.5371 |
| 2.6622 | 2.29 | 280000 | 2.5097 | 0.5383 |
| 2.6499 | 2.38 | 290000 | 2.5026 | 0.5396 |
| 2.6361 | 2.46 | 300000 | 2.4916 | 0.5412 |
| 2.629 | 2.54 | 310000 | 2.4843 | 0.5421 |
| 2.6269 | 2.62 | 320000 | 2.4737 | 0.5432 |
| 2.6175 | 2.7 | 330000 | 2.4676 | 0.5443 |
| 2.5961 | 2.79 | 340000 | 2.4580 | 0.5457 |
| 2.5926 | 2.87 | 350000 | 2.4502 | 0.5468 |
| 2.5866 | 2.95 | 360000 | 2.4413 | 0.5480 |
| 2.5781 | 3.03 | 370000 | 2.4340 | 0.5488 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
JUNYIDA/my_awesome_model
|
JUNYIDA
| 2023-06-22T16:56:45Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T15:26:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8555347091932458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4633
- Accuracy: 0.8555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3945 | 1.0 | 534 | 0.3473 | 0.8527 |
| 0.2174 | 2.0 | 1068 | 0.4633 | 0.8555 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
VMware/bert-large-mrqa
|
VMware
| 2023-06-22T16:36:05Z | 173 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:mrqa",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-17T20:46:18Z |
---
license: apache-2.0
datasets:
- mrqa
language:
- en
metrics:
- exact_match
- f1
model-index:
- name: VMware/bert-large-mrqa
results:
- task:
type: Question-Answering
dataset:
type: mrqa
name: MRQA
metrics:
- type: exact_match
value: 69.52
name: Eval EM
- type: f1
value: 80.50
name: Eval F1
- type: exact_match
value: 55.00
name: Test EM
- type: f1
value: 65.78
name: Test F1
---
This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab.
# Model Details
- **Model name:** BERT-Large-MRQA
- **Model type:** Extractive Question Answering
- **Parent Model:** [BERT-Large-uncased](https://huggingface.co/bert-large-uncased)
- **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering)
- **Training data size:** 516,819 examples
- **Training time:** 28:35:38 on 1 Nvidia V100 32GB GPU
- **Language:** English
- **Framework:** PyTorch
- **Model version:** 1.0
# Intended Use
This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding.
# How to Use
```python
from transformers import pipeline
question_answerer = pipeline("question-answering", model='VMware/bert-large-mrqa')
context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT."
question = "What is MRQA?"
result = question_answerer(question=question, context=context)
print(result)
# {
# 'score': 0.864973783493042,
# 'start': 30,
# 'end': 68,
# 'answer': 'Machine Reading for Question Answering'
# }
```
# Training Details
The model was trained for 1 epoch on the MRQA training set.
## Training Hyperparameters
```python
from transformers import TrainingArguments

args = TrainingArguments(
"bert-large-mrqa",
save_strategy="epoch",
learning_rate=1e-5,
num_train_epochs=1,
weight_decay=0.01,
per_device_train_batch_size=8,
)
```
# Evaluation Metrics
The model was evaluated using standard metrics for question-answering models, including:
Exact match (EM): The percentage of questions for which the model produces an exact match with the ground truth answer.
F1 score: The harmonic mean of precision and recall, which measures the overlap between the predicted answer and the ground truth answer.
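A minimal sketch of how SQuAD-style EM and token-level F1 can be computed for a single prediction (simplified; the usual normalization of articles and punctuation is omitted):
```python
from collections import Counter

def exact_match(prediction: str, truth: str) -> int:
    # 1 if the normalized strings match exactly, else 0
    return int(prediction.strip().lower() == truth.strip().lower())

def token_f1(prediction: str, truth: str) -> float:
    pred_tokens = prediction.lower().split()
    truth_tokens = truth.lower().split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Machine Reading for Question Answering", "Machine Reading for Question Answering"))  # 1
print(round(token_f1("reading for question answering", "Machine Reading for Question Answering"), 2))
```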
# Model Family Performance
| Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 |
|---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 |
| BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 |
| BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 |
| DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 |
| DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 |
| DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** |
| ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 |
| ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 |
| ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 |
| MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 |
| MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 |
| MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 |
| MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 |
| MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 |
| TinyRoBERTa | 81,529,346 | 4:27:06 \* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 |
| RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 |
| RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 |
\* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA.
# Limitations and Bias
The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include:
- Language: The model is designed to work with English text only and may not perform as well on other languages.
- Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge.
- Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets.
In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
|
imnanan/test-meli
|
imnanan
| 2023-06-22T16:30:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T15:44:39Z |
---
license: creativeml-openrail-m
---
|
UnaiGurbindo/dqn-SpaceInvadersNoFrameskip-v4
|
UnaiGurbindo
| 2023-06-22T16:24:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T15:07:48Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 507.00 +/- 141.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga UnaiGurbindo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga UnaiGurbindo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga UnaiGurbindo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Heefy/Emma
|
Heefy
| 2023-06-22T16:17:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T16:17:50Z |
---
license: creativeml-openrail-m
---
|
aminramezani345/finetuning-sentiment-model-3000-samples
|
aminramezani345
| 2023-06-22T16:11:54Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-05T15:28:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8766666666666667
- name: F1
type: f1
value: 0.8786885245901639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3040
- Accuracy: 0.8767
- F1: 0.8787
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Niftynr/falcon-7b-e_100
|
Niftynr
| 2023-06-22T16:10:52Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T16:10:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (a loading sketch using these values follows the list below):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
brunoleme/my_awesome_eli5_clm-model
|
brunoleme
| 2023-06-22T16:02:27Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T15:00:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8709 | 1.0 | 1113 | 3.7946 |
| 3.7741 | 2.0 | 2226 | 3.7780 |
| 3.7275 | 3.0 | 3339 | 3.7753 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Akazi/resnet_c_s_redwood_finetuned
|
Akazi
| 2023-06-22T15:42:14Z | 13 | 0 |
transformers
|
[
"transformers",
"image-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-21T19:36:23Z |
---
license: mit
pipeline_tag: image-classification
---
# Finetuned ResNet Model for Park Image Classification
## Overview
This model is a finetuned ResNet model that has been trained on images from a local park in Pleasanton.
## Dataset
The training dataset consists of two classes: "Sierra Redwood" and "Coastal Redwood". The dataset contains images captured within the park in Pleasanton.
## Model Architecture
The model architecture used for this classification task is ResNet-50.
## Usage
To use this model, you can follow these steps:
1. Install the required dependencies, including PyTorch and torchvision.
2. Download the model file "resnet_park_redwood_finetuned.tar.gz" from the provided link.
3. Load the model using the `torch.load` function and extract the model weights.
4. Prepare your input image by resizing it to 224x224 pixels and applying the necessary transformations (e.g., normalization).
5. Pass the preprocessed image through the model to obtain the predicted class probabilities.
6. Optionally, apply softmax to the predicted probabilities to obtain normalized scores.
7. The model will output the predicted label (Sierra Redwood or Coastal Redwood) along with the corresponding probability scores.
## Example Code
Here's an example code snippet for using the finetuned ResNet model (a sketch using the `transformers` pipeline API, assuming the repository contains a transformers-format checkpoint and image processor):
```python
from transformers import pipeline

# Model id assumed to match this repository; use a local path if you downloaded the weights instead
model_name = "Akazi/resnet_c_s_redwood_finetuned"
image_path = "path/to/your/image.jpg"

# Create an image-classification pipeline (resizing and normalization are handled by the image processor)
classifier = pipeline("image-classification", model=model_name)

# Perform image classification
result = classifier(image_path)

# Print the top predicted label (Sierra Redwood or Coastal Redwood) and its score
predicted_label = result[0]["label"]
print(predicted_label, result[0]["score"])
```
|
Mtc2/q-Taxi-v3
|
Mtc2
| 2023-06-22T15:34:06Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T15:34:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Deep RL Course notebooks;
# it downloads the pickle from the Hub and returns the saved Q-table dictionary.
model = load_from_hub(repo_id="Mtc2/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
HanBi/my_awesome_model
|
HanBi
| 2023-06-22T15:29:00Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:movie_rationales",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T09:21:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- movie_rationales
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: movie_rationales
type: movie_rationales
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8844221105527639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the movie_rationales dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2762
- Accuracy: 0.8844
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 0.4182 | 0.8040 |
| No log | 2.0 | 200 | 0.2762 | 0.8844 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Niftynr/falcon-7b-faq
|
Niftynr
| 2023-06-22T15:25:23Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T14:17:26Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
swl-models/MsceneMix-v1.0
|
swl-models
| 2023-06-22T15:24:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T15:17:26Z |
---
license: creativeml-openrail-m
---
|
ufal/byt5-small-multilexnorm2021-hr
|
ufal
| 2023-06-22T15:19:05Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"lexical normalization",
"hr",
"dataset:mc4",
"dataset:wikipedia",
"dataset:multilexnorm",
"arxiv:2105.13626",
"arxiv:1907.06292",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
language: hr
datasets:
- mc4
- wikipedia
- multilexnorm
tags:
- lexical normalization
license: apache-2.0
---
# Fine-tuned ByT5-small for MultiLexNorm (Croatian version)

This is the official release of the fine-tuned models for **the winning entry** to the [*W-NUT 2021: Multilingual Lexical Normalization (MultiLexNorm)* shared task](https://noisy-text.github.io/2021/multi-lexnorm.html), which evaluates lexical-normalization systems on 12 social media datasets in 11 languages.
Our system is based on [ByT5](https://arxiv.org/abs/2105.13626), which we first pre-train on synthetic data and then fine-tune on authentic normalization data. It achieves the best performance by a wide margin in intrinsic evaluation, and also the best performance in extrinsic evaluation through dependency parsing. In addition to these fine-tuned models, we also release the source files on [GitHub](https://github.com/ufal/multilexnorm2021) and an interactive demo on [Google Colab](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing).
## How to use
The model was *not* fine-tuned in a standard sentence-to-sentence setting – instead, it was tailored to the token-to-token definition of MultiLexNorm data. Please refer to [**the interactive demo on Colab notebook**](https://colab.research.google.com/drive/1rxpI8IlKk-D2crFqi2hdzbTBIezqgsCg?usp=sharing) to learn how to use these models.
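For completeness, a minimal checkpoint-loading sketch; the token-to-token normalization procedure from the Colab demo (splitting the sentence into tokens and decoding each one) is not reproduced here:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "ufal/byt5-small-multilexnorm2021-hr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# The model expects MultiLexNorm-style token-to-token inputs, not plain sentences;
# see the linked Colab notebook for the full normalization loop.
```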
## How to cite
```bibtex
@inproceedings{wnut-ufal,
title= "{ÚFAL} at {MultiLexNorm} 2021: Improving Multilingual Lexical Normalization by Fine-tuning {ByT5}",
author = "Samuel, David and Straka, Milan",
booktitle = "Proceedings of the 7th Workshop on Noisy User-generated Text (W-NUT 2021)",
year = "2021",
publisher = "Association for Computational Linguistics",
address = "Punta Cana, Dominican Republic"
}
```
## ByT5 - Small
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-small).
ByT5 was pre-trained only on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-small` significantly outperforms [mt5-small](https://huggingface.co/google/mt5-small) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
|
rudzhRjwu/my_awesome_eli5_clm-model
|
rudzhRjwu
| 2023-06-22T15:01:11Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T14:45:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8651 | 1.0 | 1116 | 3.7546 |
| 3.7676 | 2.0 | 2232 | 3.7417 |
| 3.7342 | 3.0 | 3348 | 3.7374 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
jondurbin/airoboros-7b-gpt4-1.1
|
jondurbin
| 2023-06-22T14:59:23Z | 1,425 | 5 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.1",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-09T23:30:55Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.1
---
## Delta 1.1
This is a minor update of https://huggingface.co/jondurbin/airoboros-7b-gpt4 with ~1k more coding instructions, and fixes/improvements to context instructions from https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1
The remainder of the model card is duplicated from the original model card.
## Overview
This is a fine-tuned 7b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
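As a concrete illustration, a small helper that assembles a single-turn prompt in this format (the helper is a sketch, not part of the original training or inference code):
```python
# System preamble quoted from the card
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_prompt: str) -> str:
    # preamble + single space + "USER: " + prompt + single space + "ASSISTANT:"
    # (the model's reply is generated after the final colon)
    return f"{SYSTEM} USER: {user_prompt} ASSISTANT:"

print(build_prompt("What is the capital of France?"))
```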
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with many question-and-answer pairs in the context. If you ask a question of a document containing question/answer pairs, it may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts. Or, you might be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block."
### Other stuff
#### Better coding
Example:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
<details>
<summary>Response (the output was markdown, so this is just copy paste)</summary>
To create a python application with the given requirements, you can follow these steps:
1. Install the required packages:
```bash
pip install asyncio fastapi uvicorn
```
2. Create a new file called `main.py` and add the following code:
```python
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
from uvicorn.request import Request
app = FastAPI()
class FileModel(BaseModel):
id: str
file: str
@app.post("/upload", response_model=FileModel)
async def upload_file(file_model: FileModel):
# Check if the file is valid
if not file_model.file:
raise HTTPException(status_code=400, detail="File must be provided")
# Calculate the sha256 checksum of the file
sha256_checksum = await calculate_sha256(file_model.file)
# Insert the file into the database
await insert_into_db(file_model.id, sha256_checksum)
return {"sha256_checksum": sha256_checksum}
async def calculate_sha256(file_path: str) -> str:
# Implement the sha256 calculation logic here
pass
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
# Implement the database insertion logic here
pass
async def get_current_date():
return {"date": datetime.now().isoformat()}
async def main():
# Start the FastAPI webserver
app.run(debug=True)
if __name__ == "__main__":
await main()
```
3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. For example:
```python
async def calculate_sha256(file_path: str) -> str:
import hashlib
file_path = Path(file_path).read_text()
hash_obj = hashlib.sha256()
hash_obj.update(file_path)
sha256_checksum = hash_obj.hexdigest()
return sha256_checksum
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
import psycopg2
conn = psycopg2.connect(database="your_database", user="your_user", password="your_password")
cursor = conn.cursor()
query = f"INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);"
cursor.execute(query, (file_id, sha256_checksum))
conn.commit()
cursor.close()
conn.close()
```
4. Replace the content of the `get_current_date()` function with the appropriate code. For example:
```python
async def get_current_date():
return {"date": datetime.now().isoformat()}
```
5. Run the application:
```bash
uvicorn main:app --reload
```
Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate.
</details>
<details>
<summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary>
Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values.
```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>
#include <pthread.h>
#define MAX_KEY_LEN 256
#define MAX_VALUE_LEN 256
typedef struct Node {
char key[MAX_KEY_LEN];
char value[MAX_VALUE_LEN];
struct Node* left;
struct Node* right;
} TreeNode;
TreeNode* tree = NULL;
void insert(char* key, char* value) {
TreeNode* parent = (TreeNode*)malloc(sizeof(TreeNode));
if (!parent) {
perror("malloc failed");
exit(1);
}
strcpy(parent->key, key);
strcpy(parent->value, value);
parent->left = parent->right = NULL;
if (!tree) {
tree = parent;
} else {
insert(parent->key, parent->value);
}
}
char* search(char* key) {
TreeNode* current = tree;
while (current != NULL) {
if (strcmp(key, current->key) == 0) {
return current->value;
}
current = current->right;
}
return NULL;
}
void print_tree() {
TreeNode* current = tree;
while (current != NULL) {
printf("%s = %s\n", current->key, current->value);
current = current->right;
}
}
void* accept_client(void* args) {
int client_socket = *(int*)args;
char buffer[1024];
while (1) {
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
char* input = buffer;
char* input_token;
while ((input_token = strtok(input, " \n")) != NULL) {
switch (input_token[0]) {
case 'G':
if (!search(input_token)) {
perror("Value not found");
exit(1);
}
printf("%s = %s\n", input_token, search(input_token));
break;
case 'S':
if (!strcmp(input_token, "SET")) {
char* key = strtok(NULL, " \n");
char* value = strtok(NULL, " \n");
if (!key || !value) {
perror("Invalid input");
exit(1);
}
insert(key, value);
printf("%s set %s to %s\n", input_token, key, value);
} else {
perror("Invalid command");
exit(1);
}
break;
default:
perror("Invalid input");
exit(1);
}
input = strtok(NULL, " \n");
}
}
return NULL;
}
int main() {
int server_socket = socket(AF_INET, SOCK_STREAM, 0);
if (server_socket < 0) {
perror("socket failed");
exit(1);
}
struct sockaddr_in server_addr;
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(8080);
server_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
perror("bind failed");
exit(1);
}
if (listen(server_socket, 5) < 0) {
perror("listen failed");
exit(1);
}
pthread_t accept_thread;
pthread_create(&accept_thread, NULL, accept_client, &server_socket);
char* client_input;
int client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL);
if (client_socket < 0) {
perror("accept failed");
exit(1);
}
while (1) {
sleep(1);
char buffer[1024];
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
client_input = buffer;
parse_input(client_input);
}
close(client_socket);
pthread_join(accept_thread, NULL);
return 0;
}
```
To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands:
```sh
gcc -o multi_threaded_tcp_server multi_threaded_tcp_server.c
./multi_threaded_tcp_server
```
This implementation creates a simple binary tree to store the input values. The accept() function creates a new thread for each client connection, which handles the incoming data and forwards it to the main thread. The main thread parses the input string and handles SET and GET commands.
Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree.
</details>
#### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
#### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: [C, Pluto]
```
#### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-65b-gpt4-1.2
|
jondurbin
| 2023-06-22T14:59:15Z | 1,424 | 22 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-14T09:19:02Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.2
---
### Overview
This is a qlora fine-tuned 65b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.1), but with a 65b model, thousands of new training examples, and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
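For illustration, a minimal sketch (not part of the released training code) of assembling a prompt in this format; the user message below is a placeholder:
```python
# Minimal sketch of the prompt format described above.
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    # preamble + single space + "USER: " + message + single space + "ASSISTANT:"
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("Implement the Snake game in python. PLAINFORMAT"))
```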
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-65b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GGML
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred new role-playing examples.
- A few thousand ORCA style reasoning/math questions with ELI5 prompts used to generate the responses (the ELI5 framing is not needed in your prompts to this model, however; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-33b-gpt4-1.2
|
jondurbin
| 2023-06-22T14:59:08Z | 1,436 | 9 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-14T09:19:13Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.2
---
### Overview
This is a qlora fine-tuned 33b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.1), with thousands of new training examples and an update that allows "PLAINFORMAT" at the end of coding prompts to print just the code, without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-33B-gpt4-1.2-GGML
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred new role-playing examples.
- A few thousand ORCA style reasoning/math questions with ELI5 prompts used to generate the responses (the ELI5 framing is not needed in your prompts to this model, however; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-13b-gpt4-1.3
|
jondurbin
| 2023-06-22T14:58:31Z | 1,431 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T07:08:57Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.3
---
__This version has problems, use if you dare, or wait for 1.4.__
### Overview
This is a qlora fine-tuned 13b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.2) with a few enhancements:
- All coding instructions have an equivalent " PLAINFORMAT" version now.
- Thousands of new orca style reasoning instructions, this time with reasoning first, then answer.
- A few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with previous full fine-tune versions.
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-13b-gpt4-1.3 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-7b-gpt4-1.3
|
jondurbin
| 2023-06-22T14:58:20Z | 1,429 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T07:09:09Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.3
---
__This version has problems, use if you dare, or wait for 1.4.__
### Overview
This is a qlora fine-tuned 7b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2) with a few enhancements:
- All coding instructions have an equivalent " PLAINFORMAT" version now.
- Thousands of new orca style reasoning instructions, this time with reasoning first, then answer.
- A few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with previous full fine-tune versions.
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4-1.3 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
Serendipity34/my_awesome_eli5_clm-model
|
Serendipity34
| 2023-06-22T14:56:27Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T12:15:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8711 | 1.0 | 1134 | 3.7645 |
| 3.7705 | 2.0 | 2268 | 3.7486 |
| 3.7324 | 3.0 | 3402 | 3.7448 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
swl-models/CuteYukiMix-v3.0
|
swl-models
| 2023-06-22T14:49:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:34:19Z |
---
license: creativeml-openrail-m
---
|
swl-models/CuteYukiMix-v2.0
|
swl-models
| 2023-06-22T14:47:21Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:34:08Z |
---
license: creativeml-openrail-m
---
|
HoussemMammeri/BERT-V1
|
HoussemMammeri
| 2023-06-22T14:38:27Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T12:25:02Z |
---
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: BERT-V1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93568
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-V1
This model is a fine-tuned version of [robertsamoilescu/movie-sentiment-bert-base-uncased](https://huggingface.co/robertsamoilescu/movie-sentiment-bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3975
- Accuracy: 0.9357
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0862 | 1.0 | 1563 | 0.2823 | 0.9331 |
| 0.0263 | 2.0 | 3126 | 0.3975 | 0.9357 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
evatan/cucumber_w_prior
|
evatan
| 2023-06-22T14:11:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T13:46:00Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cucumber
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/cucumber_w_prior
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cucumber using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
user1251/soccer_finetuned_model2_final2
|
user1251
| 2023-06-22T14:02:55Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T13:45:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model2_final2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model2_final2
This model is a fine-tuned version of [user1251/soccer_finetuned_model2_final1](https://huggingface.co/user1251/soccer_finetuned_model2_final1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6121
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8481 | 1.0 | 1232 | 0.7025 |
| 0.7256 | 2.0 | 2464 | 0.6326 |
| 0.6693 | 3.0 | 3696 | 0.6121 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ricklicona/bert-finetuned-ner
|
ricklicona
| 2023-06-22T14:02:41Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-21T14:24:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9357722231418639
- name: Recall
type: recall
value: 0.9513631773813531
- name: F1
type: f1
value: 0.9435032963364767
- name: Accuracy
type: accuracy
value: 0.9867840113027609
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0580
- Precision: 0.9358
- Recall: 0.9514
- F1: 0.9435
- Accuracy: 0.9868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0849 | 1.0 | 1756 | 0.0663 | 0.9118 | 0.9355 | 0.9235 | 0.9829 |
| 0.0353 | 2.0 | 3512 | 0.0600 | 0.9277 | 0.9480 | 0.9377 | 0.9859 |
| 0.019 | 3.0 | 5268 | 0.0580 | 0.9358 | 0.9514 | 0.9435 | 0.9868 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.1.0.dev20230616
- Datasets 2.12.0
- Tokenizers 0.13.3
|
guilleguells/cypher-7b-SmallModel
|
guilleguells
| 2023-06-22T13:48:57Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T13:48:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
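For reference, a minimal sketch of reproducing these settings with `transformers.BitsAndBytesConfig` (the base model id is an assumption, since the card does not name it):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit nf4 quantization with double quantization and bfloat16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-id" is a placeholder for the (unstated) base model of this adapter.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
```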
### Framework versions
- PEFT 0.4.0.dev0
|
user1251/soccer_finetuned_model2_final1
|
user1251
| 2023-06-22T13:38:28Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T13:35:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model2_final1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model2_final1
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 1.3764 |
| No log | 2.0 | 250 | 1.1704 |
| No log | 3.0 | 375 | 1.1184 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
t3PbMvBN6SXv/ppo-LunarLander-v2
|
t3PbMvBN6SXv
| 2023-06-22T13:28:52Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T01:33:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.75 +/- 21.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
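Until the TODO above is filled in, here is a minimal evaluation sketch (the checkpoint filename is an assumption; check the repository's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; adjust to the actual .zip in the repo.
checkpoint = load_from_hub("t3PbMvBN6SXv/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```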
|
guilleguells/cypher-7b-BigModel
|
guilleguells
| 2023-06-22T13:24:36Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T13:24:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
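A minimal sketch of recreating these settings and attaching this adapter with PEFT (the base model id is an assumption, since the card does not name it):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit nf4 quantization with double quantization and bfloat16 compute, as listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# "base-model-id" is a placeholder; load the quantized base, then the adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "guilleguells/cypher-7b-BigModel")
```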
### Framework versions
- PEFT 0.4.0.dev0
|
jackie68/foodImageToText
|
jackie68
| 2023-06-22T13:13:48Z | 193 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-22T12:29:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: foods
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
config: default
split: train[:5000]
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# foods
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4850
- Accuracy: 0.925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.215 | 0.99 | 62 | 2.9381 | 0.778 |
| 1.7683 | 2.0 | 125 | 1.6041 | 0.911 |
| 1.2081 | 2.99 | 187 | 1.1491 | 0.894 |
| 0.82 | 4.0 | 250 | 0.9028 | 0.899 |
| 0.7188 | 4.99 | 312 | 0.7217 | 0.913 |
| 0.5186 | 6.0 | 375 | 0.5988 | 0.928 |
| 0.4582 | 6.99 | 437 | 0.5468 | 0.926 |
| 0.4185 | 8.0 | 500 | 0.4943 | 0.93 |
| 0.3909 | 8.99 | 562 | 0.4865 | 0.925 |
| 0.3513 | 9.92 | 620 | 0.4850 | 0.925 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
swl-models/DarkSushiMix-Darker
|
swl-models
| 2023-06-22T13:13:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T13:11:45Z |
---
license: creativeml-openrail-m
---
|
ighina/roberta_topseg_softmax_mean_wikicity
|
ighina
| 2023-06-22T13:13:27Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-22T13:04:40Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11254 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
A1abz/ppo-Huggy
|
A1abz
| 2023-06-22T13:08:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T13:08:44Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: A1abz/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
swl-models/DarkSushiMix-Colorful
|
swl-models
| 2023-06-22T13:02:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T13:01:07Z |
---
license: creativeml-openrail-m
---
|
serkanBurakOrs/poca-SoccerTwos
|
serkanBurakOrs
| 2023-06-22T13:00:34Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-17T13:38:25Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: serkanBurakOrs/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
swl-models/DarkSushiMix-2.25D
|
swl-models
| 2023-06-22T12:59:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T12:57:05Z |
---
license: creativeml-openrail-m
---
|
pellucid/translation_model_en_ko
|
pellucid
| 2023-06-22T12:50:59Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:opus100",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T11:45:57Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- opus100
metrics:
- bleu
model-index:
- name: translation_model_en_ko
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus100
type: opus100
config: en-ko
split: train
args: en-ko
metrics:
- name: Bleu
type: bleu
value: 0.05548080664699851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation_model_en_ko
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on the opus100 dataset.
It achieves the following results on the evaluation set:
- Loss: 7.6056
- Bleu: 0.0555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
gokuls/sa_bert_12_layer_modified_complete_training_48
|
gokuls
| 2023-06-22T12:41:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-20T10:02:27Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sa_bert_12_layer_modified_complete_training_48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_bert_12_layer_modified_complete_training_48
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7897
- Accuracy: 0.5117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.5942 | 0.05 | 10000 | 6.5714 | 0.1229 |
| 6.1563 | 0.11 | 20000 | 6.3437 | 0.1392 |
| 6.1425 | 0.16 | 30000 | 6.2474 | 0.1444 |
| 6.2249 | 0.22 | 40000 | 6.1900 | 0.1468 |
| 6.1498 | 0.27 | 50000 | 6.1482 | 0.1487 |
| 6.0528 | 0.33 | 60000 | 6.1192 | 0.1492 |
| 6.0103 | 0.38 | 70000 | 6.0762 | 0.1504 |
| 5.8523 | 0.44 | 80000 | 5.8731 | 0.1615 |
| 5.91 | 0.49 | 90000 | 5.7442 | 0.1765 |
| 5.4931 | 0.55 | 100000 | 5.5985 | 0.1952 |
| 5.4145 | 0.6 | 110000 | 5.4716 | 0.2100 |
| 5.3729 | 0.66 | 120000 | 5.3366 | 0.2247 |
| 5.2655 | 0.71 | 130000 | 5.1946 | 0.2417 |
| 5.2975 | 0.76 | 140000 | 5.0287 | 0.2600 |
| 4.9997 | 0.82 | 150000 | 4.8593 | 0.2791 |
| 4.831 | 0.87 | 160000 | 4.6226 | 0.3041 |
| 4.9176 | 0.93 | 170000 | 4.4211 | 0.3257 |
| 4.5352 | 0.98 | 180000 | 4.2328 | 0.3429 |
| 4.1536 | 1.04 | 190000 | 4.0635 | 0.3598 |
| 4.0216 | 1.09 | 200000 | 3.9109 | 0.3755 |
| 4.0744 | 1.15 | 210000 | 3.7761 | 0.3897 |
| 3.7468 | 1.2 | 220000 | 3.6636 | 0.4038 |
| 3.5015 | 1.26 | 230000 | 3.5047 | 0.4236 |
| 3.5717 | 1.31 | 240000 | 3.4014 | 0.4370 |
| 3.1969 | 1.37 | 250000 | 3.3173 | 0.4479 |
| 3.5026 | 1.42 | 260000 | 3.2254 | 0.4588 |
| 3.287 | 1.47 | 270000 | 3.1845 | 0.4643 |
| 3.3462 | 1.53 | 280000 | 3.0979 | 0.4738 |
| 3.3996 | 1.58 | 290000 | 3.0808 | 0.4764 |
| 3.2324 | 1.64 | 300000 | 3.0163 | 0.4843 |
| 3.0972 | 1.69 | 310000 | 2.9738 | 0.4890 |
| 3.1621 | 1.75 | 320000 | 2.9450 | 0.4927 |
| 3.0282 | 1.8 | 330000 | 2.9135 | 0.4964 |
| 3.0674 | 1.86 | 340000 | 2.9059 | 0.4979 |
| 2.9437 | 1.91 | 350000 | 2.8810 | 0.5007 |
| 2.8208 | 1.97 | 360000 | 2.8316 | 0.5064 |
| 2.9005 | 2.02 | 370000 | 2.8061 | 0.5098 |
| 2.7574 | 2.08 | 380000 | 2.7897 | 0.5117 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kebei/poca-SoccerTwos
|
kebei
| 2023-06-22T12:35:53Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-22T12:35:46Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: kebei/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
wordcab/whisper-large-fp16-ru
|
wordcab
| 2023-06-22T12:23:03Z | 3 | 0 |
transformers
|
[
"transformers",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-22T08:02:51Z |
---
license: apache-2.0
language:
- ru
---
This is a ctranslate2 float16 version of the [mitchelldehaven/whisper-large-v2-ru](https://huggingface.co/mitchelldehaven/whisper-large-v2-ru) model.
|
wordcab/whisper-large-int8-ru
|
wordcab
| 2023-06-22T12:21:04Z | 3 | 0 |
transformers
|
[
"transformers",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-22T07:37:41Z |
---
license: apache-2.0
language:
- ru
---
This is a ctranslate2 int8 version of the [mitchelldehaven/whisper-large-v2-ru](https://huggingface.co/mitchelldehaven/whisper-large-v2-ru) model.
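Since this is a CTranslate2 conversion, one way to run it is through `faster-whisper` (a minimal sketch; the audio path is a placeholder):
```python
from faster_whisper import WhisperModel

# Load the CTranslate2 weights from the Hub with int8 compute.
model = WhisperModel("wordcab/whisper-large-int8-ru", compute_type="int8")

# "audio.wav" is a placeholder path.
segments, info = model.transcribe("audio.wav", language="ru")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```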
|
PhaniManda/autotrain-test-token-classification-68284137539
|
PhaniManda
| 2023-06-22T12:06:34Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"autotrain",
"en",
"dataset:PhaniManda/autotrain-data-test-token-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-22T12:05:37Z |
---
tags:
- autotrain
- token-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- PhaniManda/autotrain-data-test-token-classification
co2_eq_emissions:
emissions: 0.09373580480950418
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 68284137539
- CO2 Emissions (in grams): 0.0937
## Validation Metrics
- Loss: 1.234
- Accuracy: 0.579
- Precision: 0.395
- Recall: 0.270
- F1: 0.321
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/PhaniManda/autotrain-test-token-classification-68284137539
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("PhaniManda/autotrain-test-token-classification-68284137539", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("PhaniManda/autotrain-test-token-classification-68284137539", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Tessro/ppo-LunarLander-v
|
Tessro
| 2023-06-22T12:02:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T12:01:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: abhi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.02 +/- 71.97
name: mean_reward
verified: false
---
# **abhi** Agent playing **LunarLander-v2**
This is a trained model of a **abhi** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
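A minimal loading sketch, assuming a PPO policy (as the repository name suggests) and a standard checkpoint filename:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Both the algorithm (PPO) and the filename are assumptions; check the repo's files.
checkpoint = load_from_hub("Tessro/ppo-LunarLander-v", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Roll out a single episode with the trained policy.
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```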
|
jondurbin/airoboros-7b-gpt4-1.4-fp16
|
jondurbin
| 2023-06-22T11:51:25Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T10:46:32Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
float16 version of https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4
|
warshakhan/donut-base-HMP
|
warshakhan
| 2023-06-22T11:38:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-06-19T07:46:37Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-HMP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-HMP
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nathan-cai/Pixelcopter-PLE-v0
|
nathan-cai
| 2023-06-22T11:18:40Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-19T22:38:35Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 37.30 +/- 28.52
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fireballoon/baichuan-vicuna-chinese-7b-gptq
|
fireballoon
| 2023-06-22T11:03:11Z | 8 | 17 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"zh",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:mhhmm/leetcode-solutions-python",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-06-20T14:55:01Z |
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- QingyiSi/Alpaca-CoT
- mhhmm/leetcode-solutions-python
---
# baichuan-vicuna-chinese-7b-gptq
[baichuan-vicuna-chinese-7b](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b) quantized with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ). Inference runs in roughly 7 GB of GPU memory.
# Quantize config
```
{
"bits": 4,
"group_size": 128,
"damp_percent": 0.01,
"desc_act": true,
"sym": true,
"true_sequential": true,
"model_name_or_path": null,
"model_file_base_name": null
}
```
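These values correspond to AutoGPTQ's `BaseQuantizeConfig`; a minimal sketch of constructing the same config (not the exact quantization script used here):
```python
from auto_gptq import BaseQuantizeConfig

# Mirrors the JSON above: 4-bit, group size 128, act-order (desc_act), symmetric quantization.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    damp_percent=0.01,
    desc_act=True,
    sym=True,
    true_sequential=True,
)
```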
# Inference
```ipython
>>> from transformers import AutoTokenizer, TextStreamer
>>> from auto_gptq import AutoGPTQForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-chinese-7b-gptq", use_fast=False)
>>> model = AutoGPTQForCausalLM.from_quantized("fireballoon/baichuan-vicuna-chinese-7b-gptq", device="cuda:0")
>>> streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
>>> instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
>>> prompt = instruction.format("How can I improve my time management skills?") # user message
>>> generate_ids = model.generate(input_ids=tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=2048, streamer=streamer)
'''
To improve your time management skills, you should first identify your priorities and set goals for each day. Then, create a schedule that allows you to allocate time to each task. Finally, prioritize your tasks and stick to your schedule.
'''
```
# Test cases on [Ziya-Eval-Chinese](https://huggingface.co/datasets/IDEA-CCNL/Ziya-Eval-Chinese)
```
Q: 小明有200元,买书花了66元,买文具花了12元,小明剩下的钱比原来少了多少元?
A: 小明剩下的钱比原来少了66+12=78元。
Q: 请翻译这段英文:Timmy was fined a large sum of money by the judge for drunken driving·
A: Timmy因醉驾被法官处以巨额罚款。
Q: 贝多芬和柴可夫斯基在风格上有什么差异吗
A: 贝多芬和柴可夫斯基都是古典音乐作曲家,但他们的风格在某些方面存在差异。贝多芬以其严肃、庄重、深沉的音乐风格而闻名,他的音乐常常具有内省和情感深度。柴可夫斯基则以其华丽、优雅、精致的音乐风格而闻名,他的音乐常常具有戏剧性和情感深度。贝多芬的音乐通常被描述为古典主义音乐,而柴可夫斯基的音乐则常常被描述为浪漫主义音乐。总的来说,贝多芬的音乐更具内省和情感深度,而柴可夫斯基的音乐则更具华丽和精致。
```
```
Q: 写一篇高中青涩的爱情故事,男主是“我”,女主叫小美。我在高一喜欢上了小美。 开始写吧
A: 我和我的女朋友小美是在高一的时候认识的。那是一个阳光明媚的午后,我正在教室里看书,突然听到有人喊我的名字。我转过头,看到小美站在我的面前,微笑着看着我。那一刻,我感觉整个世界都停止了,所有的目光都聚集在我和小美身上。
从那天起,我就开始关注小美。我发现她喜欢看漫画,我就买了很多漫画书送给她;我发现她喜欢听音乐,我就经常带她去听音乐会。我们经常一起去图书馆,一起去操场散步,一起去吃路边摊。
随着我们的关系越来越亲密,我开始意识到自己已经深深地爱上了小美。我每天都会给她写情书,给她送花,给她惊喜。我总是想方设法地逗她开心,让她笑。
但是小美似乎并不像我一样喜欢我。她经常拒绝我的邀请,对我的感情视而不见。我以为她只是害羞,所以更加努力地对她好,更加频繁地出现在她的面前。
但是无论我怎么努力,小美似乎都无法看到我的好。我以为她只是需要时间来接受我的感情,所以更加耐心地等待着。
但是等待并没有带来预期的结果。小美最终向我提出了分手,并拒绝了我的所有请求。我感到震惊和心痛,不知道为什么会这样。
我以为我们的爱情是坚不可摧的,但是小美却说它不够好。我感到失落和受伤,整整一个星期都没有说话。
但是随着时间的推移,我开始意识到小美的拒绝并不是我的失败,而是她的失败。我已经尽我所能去爱她,去关心她,但是她可能并不适合我。
我开始把注意力放在其他事情上,比如学习、运动和社交。我开始交新朋友,参加新的活动,甚至开始了一次新的冒险。
当我再次回头看时,我发现自己已经不再像以前那样爱小美了。我已经学会了接受她的拒绝,并继续前进。
我不知道未来会发生什么,但我知道一件事:我已经准备好迎接生活,无论它给我带来什么。
```
```
Q: 从零学习编程,请给我一个三个月的学习计划
A: 当然,这里有一个三个月的编程学习计划:
1. 从学习编程的基础知识开始,比如变量和循环。您可以通过阅读在线教程和书籍来完成此操作。
2. 接下来,学习编程的基本语法。这将帮助您编写代码并让其运行。
3. 一旦您对编程的基本语法有了很好的理解,开始学习编程的基本概念,比如条件语句和方法。
4. 接下来,学习编程中使用的不同编程语言,比如Python或Java。
5. 最后,通过完成小型项目来应用所学知识。这将帮助您将所学应用到实际问题中。
6. 定期练习,并尝试不同的编程语言和技术。
7. 不要害怕寻求帮助或向他人寻求反馈。
8. 耐心地尝试,不要害怕犯错。
9. 定期复习和练习,以巩固您的知识。
10. 保持对新技术和编程语言的最新趋势。
记住,学习编程是一个过程,需要时间和练习。坚持不懈,保持好奇心,您将成为一名熟练的编程人员。
```
```
Q: 实现一个Python函数以使用动态编程查找两个输入字符串的最长公共子序列
A: 以下是查找两个输入字符串的最长公共子序列的最简Python函数:
def longest_common_subsets(str1, str2):
m, n = len(str1), len(str2)
i, j = 0, 0
while i < m and j < n:
while str1[i] == str2[j]:
i += 1
j += 1
if i < m:
j += 1
else:
break
return j - i
该函数的工作原理是将两个输入字符串的长度 `m` 和 `n` 作为其变量 `i` 和 `j` 的计算代价来初始化。然后,循环开始从 `i` 和 `j` 两个索引开始。
两个字符串在循环过程中一直被比较,直到 `str1` 字符串中的某个字符不在 `str2` 字符串中。在这种情况下,循环终止,并执行上述操作,从 `i` 更新为输入字符串中较长字符串的索引(上步运行中较短字符串的索引)。
然后,程序检查当前 `i` 索引是否大于 `m`,如果是,则进行动态编程(循环将继续进行`j`次更新,以便使用最长公共子序列。
最后,函数返回`j`-`i`,这给出了输入字符串的最长公共子序列的长度,其中长序列长度`j`减去第一个可以访问的最长公共子序列的长度索引`i`。此值始终大于两个输入字符串的长度总和。
```
|
xzuyn/GPT2-Stable-Diffusion-1.487M-Prompts-Deduped-6.86M
|
xzuyn
| 2023-06-22T10:59:23Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:xzuyn/Stable-Diffusion-Prompts-Deduped-1.487M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-21T08:06:02Z |
---
language:
- en
pipeline_tag: text-generation
datasets:
- xzuyn/Stable-Diffusion-Prompts-Deduped-1.487M
---
# Latest Version: *232,775* Steps
- 14,897,600 tokens seen.
# Model Info:
- Trained from scratch.
- 6.86M parameters.
- 64 context length.
# Config
```
batch_size: 1
dropout: 0
learning_rate: 0.0001
max_length: 64
n_embed: 256
n_head: 8
n_layer: 8
vocab_size: 2048
```
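A quick way to try the model for prompt generation (a sketch using the standard `transformers` text-generation pipeline, assuming the repository ships a compatible tokenizer; the seed text and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="xzuyn/GPT2-Stable-Diffusion-1.487M-Prompts-Deduped-6.86M",
)

# max_length matches the 64-token context length; sampling settings are illustrative.
for out in generator("a portrait of", max_length=64, do_sample=True, top_p=0.95, num_return_sequences=3):
    print(out["generated_text"])
```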
|
vg055/roberta-base-bne-finetuned-Tass2020
|
vg055
| 2023-06-22T10:41:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T10:40:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-bne-finetuned-Tass2020
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-Tass2020
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.951 | 1.0 | 15 | 3.4728 |
| 3.3715 | 2.0 | 30 | 2.9967 |
| 3.131 | 3.0 | 45 | 3.1550 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hbacard/bert-fine-tuned-cola
|
hbacard
| 2023-06-22T10:37:49Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T10:20:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5906590396340186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8240
- Matthews Correlation: 0.5907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4764 | 1.0 | 1069 | 0.5198 | 0.4949 |
| 0.3207 | 2.0 | 2138 | 0.6520 | 0.5757 |
| 0.1841 | 3.0 | 3207 | 0.8240 | 0.5907 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ighina/roberta_topseg_softmax_mean_wikidisease
|
ighina
| 2023-06-22T10:34:17Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-22T10:23:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ighina/roberta_topseg_softmax_mean_wikidisease
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ighina/roberta_topseg_softmax_mean_wikidisease')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ighina/roberta_topseg_softmax_mean_wikidisease')
model = AutoModel.from_pretrained('ighina/roberta_topseg_softmax_mean_wikidisease')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ighina/roberta_topseg_softmax_mean_wikidisease)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2161 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
bandrocks/my_awesome_eminem_clm-model
|
bandrocks
| 2023-06-22T10:34:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T08:53:40Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eminem_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eminem_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4213 | 1.0 | 3840 | 1.3670 |
| 1.3347 | 2.0 | 7680 | 1.3081 |
| 1.3152 | 3.0 | 11520 | 1.2919 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ketankishore/finetune_llm_falcon7b
|
ketankishore
| 2023-06-22T10:25:10Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T10:25:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
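
Since the card lists only the quantization config, here is a minimal, hedged loading sketch; the base model id `tiiuae/falcon-7b` is an assumption inferred from the repository name and is not stated in the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Rebuild the 4-bit quantization config listed above (requires a CUDA GPU with bitsandbytes).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumption: the adapter was trained on top of tiiuae/falcon-7b.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = PeftModel.from_pretrained(base, "ketankishore/finetune_llm_falcon7b")
```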
### Framework versions
- PEFT 0.4.0.dev0
|
A1abz/ppo-LunarLander-v2
|
A1abz
| 2023-06-22T10:22:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T10:22:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.90 +/- 20.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename is an assumption based on the default `huggingface_sb3` naming:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed).
checkpoint = load_from_hub("A1abz/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
jnwprk/hate_detection_model
|
jnwprk
| 2023-06-22T10:18:17Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T09:42:59Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hate_detection_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_detection_model
This model is a fine-tuned version of [sangrimlee/bert-base-multilingual-cased-nsmc](https://huggingface.co/sangrimlee/bert-base-multilingual-cased-nsmc) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2937
- Accuracy: 0.7686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 62 | 0.4613 | 0.7834 |
| No log | 2.0 | 124 | 0.5033 | 0.7516 |
| No log | 3.0 | 186 | 0.4699 | 0.7898 |
| No log | 4.0 | 248 | 0.5516 | 0.7516 |
| No log | 5.0 | 310 | 0.6990 | 0.7219 |
| No log | 6.0 | 372 | 0.6500 | 0.7665 |
| No log | 7.0 | 434 | 0.7347 | 0.7856 |
| No log | 8.0 | 496 | 0.9104 | 0.7389 |
| 0.3218 | 9.0 | 558 | 0.7689 | 0.8153 |
| 0.3218 | 10.0 | 620 | 0.9496 | 0.7792 |
| 0.3218 | 11.0 | 682 | 0.9598 | 0.7707 |
| 0.3218 | 12.0 | 744 | 1.2402 | 0.7091 |
| 0.3218 | 13.0 | 806 | 1.1616 | 0.7537 |
| 0.3218 | 14.0 | 868 | 1.0903 | 0.7771 |
| 0.3218 | 15.0 | 930 | 1.3674 | 0.7304 |
| 0.3218 | 16.0 | 992 | 1.1962 | 0.7728 |
| 0.0623 | 17.0 | 1054 | 1.3640 | 0.7452 |
| 0.0623 | 18.0 | 1116 | 1.3093 | 0.7622 |
| 0.0623 | 19.0 | 1178 | 1.3108 | 0.7707 |
| 0.0623 | 20.0 | 1240 | 1.2937 | 0.7686 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
getrajeev03/bart-large-cnn-samsum
|
getrajeev03
| 2023-06-22T10:14:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T08:39:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: bart-large-cnn-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 40.1703
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-samsum
This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4821
- Rouge1: 40.1703
- Rouge2: 20.2613
- Rougel: 30.8068
- Rougelsum: 37.4968
- Gen Len: 60.0366
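
Usage is not documented in the card; below is a minimal sketch with the `summarization` pipeline (the sample dialogue is made up, not taken from SAMSum).

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="getrajeev03/bart-large-cnn-samsum")

# Illustrative SAMSum-style dialogue (not from the dataset).
dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes, 12:30 at the usual place.
Anna: Perfect, see you there!"""

print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```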
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.1917 | 1.0 | 7366 | 1.4821 | 40.1703 | 20.2613 | 30.8068 | 37.4968 | 60.0366 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.0
- Tokenizers 0.11.0
|
tux/rl_course_vizdoom_health_gathering_supreme
|
tux
| 2023-06-22T10:06:11Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T09:44:37Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.54 +/- 4.98
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r tux/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
hts98/whisper-large-v2-paper_
|
hts98
| 2023-06-22T10:02:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-22T06:34:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v2-paper_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-paper_
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4133
- Wer: 47.7467
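
Usage is not documented in the card; a minimal transcription sketch follows (the audio path is a placeholder, and the language of the fine-tuning data is not stated).

```python
from transformers import pipeline

# Long-form transcription via chunking; "sample.wav" is a placeholder path.
asr = pipeline(
    "automatic-speech-recognition",
    model="hts98/whisper-large-v2-paper_",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])
```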
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 143 | 0.3626 | 71.8596 |
| No log | 2.0 | 286 | 0.3398 | 50.4925 |
| No log | 3.0 | 429 | 0.3426 | 52.2600 |
| 0.3684 | 4.0 | 572 | 0.3541 | 46.2800 |
| 0.3684 | 5.0 | 715 | 0.3721 | 46.6114 |
| 0.3684 | 6.0 | 858 | 0.4133 | 47.7467 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
ammaradel/PSU-LLaMA-Inference
|
ammaradel
| 2023-06-22T09:59:05Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"pytorch",
"question-answering",
"en",
"region:us"
] |
question-answering
| 2023-06-15T07:55:31Z |
---
metrics:
- accuracy
- bertscore
- bleu
- bleurt
- brier_score
- cer
pipeline_tag: question-answering
language:
- en
library_name: adapter-transformers
---
LLaMA model fine-tuned on PSU Dataset 3 (8K lines)
|
nolanspecter/ppo-Huggy
|
nolanspecter
| 2023-06-22T09:44:55Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T09:44:13Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nolanspecter/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
slimsha2dy/my_awesome_model
|
slimsha2dy
| 2023-06-22T09:18:32Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T09:07:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: test
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1855
- Accuracy: 0.925
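
A minimal usage sketch (not part of the original card); with the default Trainer export the six emotion classes come back as `LABEL_0` … `LABEL_5`, so the mapping to class names is an assumption left to the reader.

```python
from transformers import pipeline

# top_k=None returns scores for all classes instead of only the best one.
classifier = pipeline("text-classification", model="slimsha2dy/my_awesome_model", top_k=None)
print(classifier("I can't stop smiling today!"))
```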
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2413 | 1.0 | 1000 | 0.2020 | 0.922 |
| 0.1451 | 2.0 | 2000 | 0.1855 | 0.925 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chansung/gpt4-alpaca-lora-13b-decapoda-1024
|
chansung
| 2023-06-22T09:16:45Z | 0 | 4 | null |
[
"safetensors",
"alpaca",
"llama",
"chat",
"gpt4",
"text2text-generation",
"en",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2023-04-17T01:27:23Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text2text-generation
tags:
- alpaca
- llama
- chat
- gpt4
---
This repository comes with a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning run with the following settings on an 8xA100 (40G) DGX system. The token cutoff length was set to `1024` because the GPT-4 generations tend to be long.
- Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation
- Training command:
```shell
python finetune.py \
--base_model='decapoda-research/llama-13b-hf' \
--data_path='alpaca_data_gpt4.json' \
--num_epochs=10 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./gpt4-alpaca-lora-13b' \
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
--batch_size=... \
--micro_batch_size=...
```
You can see how the training went in the W&B report [here](https://wandb.ai/chansung18/gpt4_alpaca_lora/runs/s5xxpdal?workspace=user-chansung18).
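
The card does not include inference code; below is a minimal, hedged sketch that attaches the LoRA adapter to the same base model used for training. The Alpaca-style prompt template is an assumption based on the training data, not something prescribed by this repository.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")
model = PeftModel.from_pretrained(base_model, "chansung/gpt4-alpaca-lora-13b-decapoda-1024")

# Assumed Alpaca-style prompt format.
prompt = "### Instruction:\nTell me about alpacas.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```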
## Example output
1. Example #1
```
Instruction: List all Canadian provinces in alphabetical order.
Response: List of Canadian provinces in alphabetical order:
Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Northwest Territories, Nova Scotia, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan.
Instruction: Which ones are on the east side?
Response: Eastern provinces of Canada are: Newfoundland and Labrador, Nova Scotia, Prince Edward Island, New Brunswick.
Instruction: What foods are famous in each province on the east side?
Response: Newfoundland and Labrador: Jiggs Dinner, Screech, Partridgeberry Wine, Flipper Pie, Nanny’s Cookies. Nova Scotia: Donair, Pizza Burger, Apple Pie, Halifax Donair. New Brunswick:
Fried Dough (Malpeque Doughnuts), Poutine, Acadian Foods (Cote-de-beouf, potato chips, cabbage rolls, pickled beets). Prince Edward Island: Coleslaw, Lobster, Potato Bread, Blueberry Pies, Butter Tarts.
Instruction: What about sightseeing? or landmarks? list one per province
Response: Newfoundland and Labrador: Cabot Tower (St. John’s), Terra Nova National Park, Gros Morne National Park, Signal Hill. Nova Scotia: Cabot Trail, Peggy’s Cove, Lighthouses, Halifax Waterfront. New Brunswick: Fundy National Park, Bay of Fundy Tides,
Loyalist House, Hopewell Rocks. Prince Edward Island: Green Gables Shore, Cavendish National Park, Anne of Green Gables House.
```
2. Example #2
```
Instruction: Tell me about alpacas.
Response: Alpacas are a type of llama that live in the Andes mountains in South America. They are native to Peru, Chile, Bolivia, and Ecuador, where they are valued for their fiber. Alpacas are related to camels and they are cousins of the llama. Alpacas have woolly fur that comes in many colors and they are very gentle creatures. They are considered the smaller cousins of the llama and they are about the size of a medium dog. Alpacas have long necks and long, lush fur around their necks, which is known as their “ruff.” They also have long lashes that help keep dust and debris from their eyes. Alpacas have hooves that help them move easily through the rocky terrain of the Andes mountains. They make a soft “humbing” sound when they are content. Alpacas live in herds, and females are called “huacas,” and males are called “chancos.” Alpacas can live for up to 20 years. Alpacas can come in a variety of colors, including white, black, brown,
Instruction: What other animals are living in the same area?
Response: Alpacas live in the Andes mountains, and in that region there are many other animals. There are condors, vicunas, pumas, rheas, foxes, chinchillas, and more.
Instruction: Are they the same species?
Response: Alpacas and llamas are both members of the camelid family, but there are differences. Alpacas have longer necks, smaller ears, and shorter, thicker wool. Llamas are larger and their wool is coarser.
```
3. Example #3
```
Instruction: Tell me about the king of France in 2019.
Response: In 2019, King of France is not applicable, as France does not have a monarch. It has been a republic since the 1790s and its constitution prohibits the monarchy from being reinstated. The President of France is currently Emmanuel Macron.
Instruction: What about before him?
Response: Before Emmanuel Macron, there were 5 other presidents: Nicolas Sarkozy, Jacques Chirac, François Mitterrand, Valéry Giscard d’Estaing, and Georges Pompidou.
```
4. Example #4
```
Instruction: Write a Python program that prints the first 10 Fibonacci numbers.
Response:
def fibonacci_numbers():
a, b = 0, 1
for i in range(10):
a, b = b, a + b
print(a, b)
fibonacci_numbers()
Output: 0 1 1 2 3 5 8 13 21 34
Instruction: Could you explain how the code works?
Response: First, the function fibonacci_numbers() is defined. The range function creates a range of numbers, in this case between 0 and 10.
Inside the loop, we initialize two variables a and b to 0 and 1 respectively. a, b represent the previous and current Fibonacci numbers respectively. For each number in the range, we add the previous Fibonacci number b to the current one a.
At the end of the loop, the a and b variables will hold the Fibonacci numbers from 0 to 10. This is then printed on the screen.
Instruction: What is recursion?
Response: Recursion is a way of defining a method in which the method calls itself until a certain condition is met. In Python, recursion is used when there is a task that can be broken into smaller tasks that can be done recursively. A recursive function can be defined as any function that calls itself in the body of the function.
```
|
rudzhehdehd/Love_Letter
|
rudzhehdehd
| 2023-06-22T09:13:17Z | 170 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T09:07:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: Love_Letter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Love_Letter
This model is a fine-tuned version of [EasthShin/BTS_Lyrics_GPT-Neo-base](https://huggingface.co/EasthShin/BTS_Lyrics_GPT-Neo-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1046
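
Usage is not documented in the card; a minimal generation sketch follows (the prompt and sampling settings are illustrative only).

```python
from transformers import pipeline

generator = pipeline("text-generation", model="rudzhehdehd/Love_Letter")
print(generator("To my dearest,", max_new_tokens=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```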
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 200 | 1.2588 |
| No log | 2.0 | 400 | 1.1366 |
| 1.3097 | 3.0 | 600 | 1.1046 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hangjoni/ppo-LunarLander-v2
|
hangjoni
| 2023-06-22T09:13:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T09:11:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.66 +/- 16.73
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename is an assumption based on the default `huggingface_sb3` naming:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed).
checkpoint = load_from_hub("hangjoni/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Naonori/billsum_model_for_test
|
Naonori
| 2023-06-22T09:03:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T09:01:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: billsum_model_for_test
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1461
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_model_for_test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4420
- Rouge1: 0.1461
- Rouge2: 0.0524
- Rougel: 0.121
- Rougelsum: 0.121
- Gen Len: 19.0
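
Usage is not documented in the card; the sketch below assumes the model was trained with the usual `summarize:` task prefix for T5 (the example text is made up).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Naonori/billsum_model_for_test")
model = AutoModelForSeq2SeqLM.from_pretrained("Naonori/billsum_model_for_test")

# Assumed T5-style task prefix; the input is an illustrative legislative text.
text = "summarize: The bill establishes a grant program for state agencies to modernize water infrastructure and report annually on spending."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```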
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7503 | 0.1244 | 0.035 | 0.105 | 0.1052 | 19.0 |
| No log | 2.0 | 124 | 2.5250 | 0.1361 | 0.0455 | 0.1141 | 0.1144 | 19.0 |
| No log | 3.0 | 186 | 2.4594 | 0.1459 | 0.0523 | 0.1202 | 0.1202 | 19.0 |
| No log | 4.0 | 248 | 2.4420 | 0.1461 | 0.0524 | 0.121 | 0.121 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|