modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 12:32:32) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 534 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 12:31:20) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
gvij/gpt-j-6B-alpaca-gpt4
|
gvij
| 2023-06-22T20:51:02Z | 5 | 0 |
peft
|
[
"peft",
"alpaca",
"gpt4",
"gpt-j",
"instruction",
"finetuning",
"lora",
"conversational",
"dataset:vicgalle/alpaca-gpt4",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-06-22T16:10:28Z |
---
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
pipeline_tag: conversational
tags:
- alpaca
- gpt4
- gpt-j
- instruction
- finetuning
- lora
- peft
---
The GPT-J 6B model was finetuned on GPT-4 generations of the Alpaca prompts using [MonsterAPI](https://monsterapi.ai)'s no-code LLM finetuner, with LoRA, for ~65,000 steps. The run was auto-optimised to fit on 1 A6000 GPU with no out-of-memory issues, without my having to write any code or set up a GPU server with libraries for this experiment. The finetuner does it all by itself.
Documentation on no-code LLM finetuner:
https://docs.monsterapi.ai/fine-tune-a-large-language-model-llm
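As a minimal sketch (not part of the original card) of how a LoRA adapter like this is typically loaded with `peft`, assuming the adapter config points at the GPT-J 6B base model and that an Alpaca-style `### Instruction / ### Response` prompt template applies:
```python
# Hedged sketch: loading the LoRA adapter on top of its base model with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

adapter_id = "gvij/gpt-j-6B-alpaca-gpt4"
config = PeftConfig.from_pretrained(adapter_id)            # reads adapter_config.json
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)        # attaches the LoRA weights

# The Alpaca-style prompt format below is an assumption, not stated in the card.
prompt = "### Instruction:\nExplain LoRA in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```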

|
ParisNeo/lollms-personalities-zoo
|
ParisNeo
| 2023-06-22T20:44:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-21T14:22:50Z |
# lollms_personalities_zoo
Lord of LLMS personalities zoo
|
christinacdl/clickbait_binary_detection
|
christinacdl
| 2023-06-22T20:44:50Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:christinacdl/clickbait_notclickbait_dataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T14:56:44Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: clickbait_binary_detection
results: []
datasets:
- christinacdl/clickbait_notclickbait_dataset
language:
- en
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clickbait_binary_detection
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4630
- Macro F1: 0.9155
- Micro F1: 0.9215
- Accuracy: 0.9215
Performance on test set:
- Accuracy: 0.9257990867579908
- F1 score: 0.9199282431058413
- Precision: 0.9233793490724882
- Recall: 0.9168756883647268
- Matthews Correlation Coefficient: 0.8402298675576902
- Precision of each class: [0.931899 0.91485969]
- Recall of each class: [0.95152505 0.88222632]
- F1 score of each class: [0.94160977 0.89824671]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
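A minimal, hedged sketch (not the original training script) expressing the listed hyperparameters as a `transformers` `TrainingArguments` object; the output directory name is an assumption:
```python
# Hedged sketch: the listed hyperparameters as a TrainingArguments object.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="clickbait_binary_detection",  # assumed output directory name
    learning_rate=2e-6,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=10,
    seed=42,
    gradient_accumulation_steps=2,            # effective train batch size of 12
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                                # mixed precision (native AMP)
)
```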
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Micro F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:--------:|
| 0.2296 | 1.0 | 3650 | 0.2236 | 0.9105 | 0.9183 | 0.9183 |
| 0.228 | 2.0 | 7301 | 0.2708 | 0.9115 | 0.9192 | 0.9192 |
| 0.2075 | 3.0 | 10951 | 0.3141 | 0.9164 | 0.9224 | 0.9224 |
| 0.1881 | 4.0 | 14602 | 0.3211 | 0.9143 | 0.9201 | 0.9201 |
| 0.18 | 5.0 | 18252 | 0.3852 | 0.9130 | 0.9188 | 0.9188 |
| 0.1818 | 6.0 | 21903 | 0.3784 | 0.9110 | 0.9174 | 0.9174 |
| 0.1495 | 7.0 | 25553 | 0.4606 | 0.9106 | 0.9156 | 0.9156 |
| 0.1453 | 8.0 | 29204 | 0.4630 | 0.9155 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
|
sheshenin/vikash3
|
sheshenin
| 2023-06-22T20:40:31Z | 7 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T20:36:53Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### VikaSH3 Dreambooth model trained by sheshenin with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
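A minimal `diffusers` sketch (not part of the original card) for trying the concept locally; the repo's tags indicate a standard `StableDiffusionPipeline`, and the `vikash3` instance token in the prompt is a guess based on the repo name, not confirmed by the card:
```python
# Hedged sketch: loading the Dreambooth checkpoint as a StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sheshenin/vikash3", torch_dtype=torch.float16
).to("cuda")

# "vikash3" as the instance token is an assumption.
image = pipe("a portrait photo of vikash3 person").images[0]
image.save("sample.png")
```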
Sample pictures of this concept:
|
SALT-NLP/FLANG-Roberta
|
SALT-NLP
| 2023-06-22T20:31:41Z | 125 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Financial Language Modelling",
"financial-sentiment-analysis",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-03T16:16:10Z |
---
language: en
tags:
- Financial Language Modelling
- financial-sentiment-analysis
widget:
- text: Stocks rallied and the British pound <mask>.
---
## Dataset Summary
- **Homepage:** https://salt-nlp.github.io/FLANG/
- **Models:** https://huggingface.co/SALT-NLP/FLANG-BERT
- **Repository:** https://github.com/SALT-NLP/FLANG
## FLANG
FLANG is a set of large language models for Financial LANGuage tasks. These models use domain specific pre-training with preferential masking to build more robust representations for the domain. The models in the set are:\
[FLANG-BERT](https://huggingface.co/SALT-NLP/FLANG-BERT)\
[FLANG-SpanBERT](https://huggingface.co/SALT-NLP/FLANG-SpanBERT)\
[FLANG-DistilBERT](https://huggingface.co/SALT-NLP/FLANG-DistilBERT)\
[FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta)\
[FLANG-ELECTRA](https://huggingface.co/SALT-NLP/FLANG-ELECTRA)
## FLANG-Roberta
FLANG-Roberta is a pre-trained language model which uses financial keywords and phrases for preferential masking of domain specific terms. It is built by further training the RoBERTa language model in the finance domain, with improved performance over previous models due to the use of domain knowledge and vocabulary.
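A minimal usage sketch (not part of the original card) that queries the model with the card's widget example through the fill-mask pipeline; RoBERTa-style models use the `<mask>` token:
```python
# Hedged sketch: fill-mask inference with the card's widget sentence.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="SALT-NLP/FLANG-Roberta")
for pred in unmasker("Stocks rallied and the British pound <mask>."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```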
## FLUE
FLUE (Financial Language Understanding Evaluation) is a comprehensive and heterogeneous benchmark that has been built from 5 diverse financial domain specific datasets.
Sentiment Classification: [Financial PhraseBank](https://huggingface.co/datasets/financial_phrasebank)\
Sentiment Analysis, Question Answering: [FiQA 2018](https://huggingface.co/datasets/SALT-NLP/FLUE-FiQA)\
News Headlines Classification: [Headlines](https://www.kaggle.com/datasets/daittan/gold-commodity-news-and-dimensions)\
Named Entity Recognition: [NER](https://huggingface.co/datasets/SALT-NLP/FLUE-NER)\
Structure Boundary Detection: [FinSBD3](https://sites.google.com/nlg.csie.ntu.edu.tw/finweb2021/shared-task-finsbd-3)
## Citation
Please cite the model with the following citation:
```bibtex
@INPROCEEDINGS{shah-etal-2022-flang,
author = {Shah, Raj Sanjay and
Chawla, Kunal and
Eidnani, Dheeraj and
Shah, Agam and
Du, Wendi and
Chava, Sudheer and
Raman, Natraj and
Smiley, Charese and
Chen, Jiaao and
Yang, Diyi },
title = {When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain},
booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
year = {2022},
publisher = {Association for Computational Linguistics}
}
```
## Contact information
Please contact Raj Sanjay Shah (rajsanjayshah[at]gatech[dot]edu) or Sudheer Chava (schava6[at]gatech[dot]edu) or Diyi Yang (diyiy[at]stanford[dot]edu) about any FLANG-Roberta related issues and questions.
---
license: afl-3.0
---
|
serpapi/bert-base-local-results
|
serpapi
| 2023-06-22T20:16:07Z | 115 | 6 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"scraping",
"parsing",
"serp",
"api",
"opensource",
"en",
"dataset:serpapi/local-results-en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T21:53:30Z |
---
language:
- en
pipeline_tag: text-classification
widget:
- title: Rating Example
text: '4.7'
- title: Reviews Example
text: (188)
- title: Reviews Example 2
text: '188'
- title: Reviews Example 3
text: No Reviews
- title: Price Example
text: $
- title: Type Example
text: Coffee shop
- title: Address Example
text: Frederick, MD
- title: Address Example 2
text: 552 W 48th St
- title: Address Example 3
text: In Hilton Hotel
- title: Hours Example
text: Closed
- title: Hours Example 2
text: Opens 7 AM Fri
- title: Hours Example 3
text: Permanently closed
- title: Service Option Example
text: Dine-in
- title: Service Option Example 2
text: Takeout
- title: Service Option Example 3
text: Delivery
- title: Phone Example
text: (301) 000-0000
- title: Years In Business Example
text: 5+ Years in Business
- title: Button Text Example
text: Directions
- title: Description Example
text: 'Provides: Auto maintenance'
license: mit
datasets:
- serpapi/local-results-en
tags:
- scraping
- parsing
- serp
- api
- opensource
---
<h1 align="center">BERT-Based Classification Model for Google Local Listings</h1>
<p align="center">
<img src="https://camo.githubusercontent.com/6c920f0b551360ca3257308e0f3547fe538496b9cb332d6a208992030abf6c3d/68747470733a2f2f736572706170692e636f6d2f616e64726f69642d6368726f6d652d353132783531322e706e67" alt="The Logo of SerpApi" width="200" height="200">
</p>
<p align="center">
This repository contains a BERT-based classification model developed using the Hugging Face library, and a dataset gathered by <a href='https://serpapi.com/google-local-api'>SerpApi's Google Local API</a>. The model is designed to classify different texts extracted from Google Local Listings.
</p>
<p align="center">
You may check out the blog post explaining the model's usecase with an example: <a href="https://serpapi.com/blog/real-world-example-of-ai-powered-parsing/">Real World Example of AI Powered Parsing</a>.
</p>
<p align="center">
You may also check out the Open Source Github Repository that contains the source code of a Ruby Gem called <a href="https://github.com/serpapi/google-local-results-ai-parser">`google-local-results-ai-parser`</a>.
</p>
---
<h2 align="center">Usage and Classification for Parsing</h2>
<p align="center">
The example code below shows how to use the model in Python with the Inference API for prototyping. You may call it from other programming languages and parallelize your work. The prototyping endpoint allows only a limited number of calls. For <code>Production Purposes</code> or <code>Large Prototyping Activities</code>, consider setting up an <code>Inference API Endpoint from Huggingface</code>, or a <code>Private API Server</code> for serving the model.
</p>
```py
import requests

API_URL = "https://api-inference.huggingface.co/models/serpapi/bert-base-local-results"
headers = {"Authorization": "Bearer xxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "5540 N Lamar Blvd #12, Austin, TX 78756, United States",
})
```
```
Output: address
```
---
<h2 align="center">Strong Features</h2>
<div align="center">
<p>The BERT-based model excels in the following areas:</p>
<div style="display: flex; justify-content: center;">
<div style="text-align: left;">
<ul style="list-style-position: inside;">
<li><strong>Differentiating difficult semantic similarities with ease</strong>
<ul style="list-style-type: disc;">
<li><code>"No Reviews"</code> → <code>reviews</code></li>
<li><code>"(5K+)"</code> → <code>reviews</code></li>
</ul>
</li>
<li><strong>Handling partial texts that can be combined later</strong>
<ul style="list-style-type: disc;">
<li><code>"Open ⋅ Closes 5 pm"</code>
<ul style="list-style-type: circle;">
<li><code>"Open"</code> → <code>hours</code></li>
<li><code>"Closes 5 pm"</code> → <code>hours</code></li>
</ul>
</li>
</ul>
</li>
<li><strong>Handling Vocabulary from diverse areas with ease</strong>
<ul style="list-style-type: disc;">
<li><code>"Doctor"</code> → <code>type</code></li>
<li><code>"Restaurant"</code> → <code>type</code></li>
</ul>
</li>
<li><strong>Returning Assurance Score for After-Correction</strong>
<ul style="list-style-type: disc;">
<li><code>"4.7"</code> → <code>rating(0.999)</code></li>
</ul>
</li>
<li><strong>Strong Against Grammatical Mistakes</strong>
<ul style="list-style-type: disc;">
<li><code>"Krebside Pickup"</code> → <code>service options</code></li>
</ul>
</li>
</ul>
</div>
</div>
</div>
---
<h2 align="center">Parts Covered and Corresponding Keys in SerpApi Parsers</h2>
<div style="display: flex; justify-content: center;">
<div style="text-align: left;">
<ul style="list-style-position: inside;">
<li><strong>Type of Place:</strong> <code>type</code></li>
<li><strong>Number of Reviews:</strong> <code>reviews</code></li>
<li><strong>Phone Number:</strong> <code>phone</code></li>
<li><strong>Rating:</strong> <code>rating</code></li>
<li><strong>Address:</strong> <code>address</code></li>
<li><strong>Operating Hours:</strong> <code>hours</code></li>
<li><strong>Description or Descriptive Review:</strong> <code>description</code></li>
<li><strong>Expensiveness:</strong> <code>expensiveness</code></li>
<li><strong>Service Options:</strong> <code>service options</code></li>
<li><strong>Button Text:</strong> <code>links</code></li>
<li><strong>Years in Business:</strong> <code>years_in_business</code></li>
</ul>
</div>
</div>
<p align="center">
Please refer to the documentation of SerpApi's Google Local API and Google Local Pack API for more details on different parts:
</p>
<div align="center">
<strong>References:</strong>
<ul style="text-align: center; list-style-position: inside;">
<li>SerpApi's Google Local API: <a href ="https://serpapi.com/google-local-api">https://serpapi.com/google-local-api</a></li>
<li>SerpApi's Google Local Pack API: <a href="https://serpapi.com/local-pack">https://serpapi.com/local-pack</a></li>
</ul>
</div>
---
<h2 align="center">Known Limitations</h2>
<div align="center">
<p>The model has a few limitations that should be taken into account:</p>
<div style="display: flex; justify-content: center;">
<div style="text-align: left;">
<ul style="list-style-position: inside;">
<li>The model does not classify the title of a place. This is because the title often contains many elements that can be easily confused with other parts, even for a human eye.</li>
<li>The <code>label</code> key is not covered by the model, as it can be easily handled with traditional code.</li>
<li>In some cases, <code>button text</code> could be classified as <code>service options</code> or <code>address</code>. However, this can be easily avoided by checking if a text is in a button in the traditional part of the code. The button text is only used to prevent emergent cases.
<ul style="list-style-type: circle">
<li><code>"Delivery"</code> → <code>service options [Correct Label is button text]</code></li>
<li><code>"Share"</code> → <code>address [Correct Label is button text]</code></li>
</ul>
</li>
<li>In some cases, the model may classify a portion of the <code>description</code> as <code>hours</code> if the description is about operating hours. For example:
<ul style="list-style-type: disc;">
<li><code>"Drive through: Open ⋅ Closes 12 AM"</code>
<ul style="list-style-type: circle">
<li><code>"Drive through: Open"</code> → <code>description</code></li>
<li><code>"Closes 12 AM"</code> → <code>hours</code></li>
</ul>
</li>
</ul>
</li>
<li>In some cases, the model may classify some <code>description</code> as <code>type</code>. This is because some <code>description</code> values do look like <code>type</code>. For Example:
<ul style="list-style-type: circle">
<li><code>"Iconic Seattle-based coffeehouse chain"</code> → <code>type [Correct Label is description]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>reviews</code> as <code>rating</code>. This is most likely a deficiency in the training dataset, and may be resolved in the coming versions. For Example:
<ul style="list-style-type: circle">
<li><code>"Expand more"</code> → <code>hours [Correct Label is button text]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>service options</code> as <code>type</code>. This is most likely a deficiency in the training dataset, and may be resolved in the coming versions. For Example:
<ul style="list-style-type: circle">
<li><code>"Takeaway"</code> → <code>type [Correct Label is service options]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>reviews</code> as <code>hours</code> or <code>price</code>. This is most likely a deficiency in the training dataset, and may be resolved in the coming versions. For Example:
<ul style="list-style-type: circle">
<li><code>"(1.4K)"</code> → <code>rating [Correct Label is reviews]</code></li>
<li><code>"(1.6K)"</code> → <code>price [Correct Label is reviews]</code></li>
</ul>
</li>
<li>In some cases, the model may classify some <code>service options</code> as <code>description</code> or <code>type</code>. The confusion with <code>description</code> stems from a recent change in their categorization in SerpApi keys; the data contains labels from before that change. For Example:
<ul style="list-style-type: circle">
<li><code>"On-site services"</code> → <code>type [Correct Label is service options]</code></li>
<li><code>"Online appointments"</code> → <code>description [Correct Label is service options]</code></li>
</ul>
</li>
<li>The model may be susceptible to errors on one-word entries. This is a minority of cases, and it could be fixed with assurance scores (a minimal sketch of retrieving these scores follows this list). For Example:
<ul style="list-style-type: circle">
<li><code>"Sushi"</code> → <code>address(0.984), type(0.0493) [Correct Label is type]</code></li>
<li><code>"Diagorou 4"</code> → <code>address(0.999) [Correct address in same listing]</code></li>
</ul>
</li>
<li>The model cannot differentiate between extra parts that are extracted in SerpApi's Google Local API and Google Local Pack API. These parts are not feasible to extract via Classification Models.</li>
<li>The model is not designed for listings outside the English language.</li>
</ul>
</div>
</div>
</div>
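As referenced in the limitations list above, here is a minimal hedged sketch (not from the card) of retrieving the full score distribution, i.e. the assurance scores, with the `transformers` pipeline:
```python
# Hedged sketch: request scores for every label to obtain "assurance scores".
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="serpapi/bert-base-local-results",
    top_k=None,  # return the score for every label instead of only the top one
)
print(classifier("Sushi"))
# e.g. [{'label': 'address', 'score': 0.98...}, {'label': 'type', 'score': 0.04...}, ...]
```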
---
<h2 align="center">Disclaimer</h2>
<p align="center">We value full transparency and painful honesty both in our internal and external communications. We believe a world with complete and open transparency is a better world.</p>
<p align="center">
However, while we strive for transparency, there are certain situations where sharing specific datasets may not be feasible or advisable. In the case of the dataset used to train our model, which contains different parts of a Google Local Listing including addresses and phone numbers, we have made a careful decision not to share it. We prioritize the well-being and safety of individuals, and sharing this dataset could potentially cause harm to people whose personal information is included.
</p>
<p align="center">
Protecting the privacy and security of individuals is of utmost importance to us. Disclosing personal information, such as addresses and phone numbers, without proper consent or safeguards could lead to privacy violations, identity theft, harassment, or other forms of misuse. Our commitment to responsible data usage means that we handle sensitive information with great care and take appropriate measures to ensure its protection.
</p>
<p align="center">
While we understand the value of transparency, we also recognize the need to strike a balance between transparency and safeguarding individuals' privacy and security. In this particular case, the potential harm that could result from sharing the dataset outweighs the benefits of complete transparency. By prioritizing privacy, we aim to create a safer and more secure environment for all individuals involved.
</p>
<p align="center">
We appreciate your understanding and support in our commitment to responsible and ethical data practices. If you have any further questions or concerns, please feel free to reach out to us.
</p>
|
chrismwiggs/TEST-PT
|
chrismwiggs
| 2023-06-22T20:02:18Z | 0 | 0 |
nemo
|
[
"nemo",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"license:apache-2.0",
"region:us"
] | null | 2023-06-22T20:01:34Z |
---
license: apache-2.0
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- en
metrics:
- accuracy
library_name: nemo
---
|
brotSchimmelt/falcon-7b-instruct-reddit-cmv-train-on-val
|
brotSchimmelt
| 2023-06-22T19:48:44Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T19:48:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
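A minimal, hedged sketch (not the original training script) of the listed settings as a `transformers` `BitsAndBytesConfig`, e.g. for reloading the base model in 4-bit before attaching this adapter:
```python
# Hedged sketch: the listed bitsandbytes settings as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```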
### Framework versions
- PEFT 0.4.0.dev0
|
catrabbitbear/Reinforce-cartpole-1
|
catrabbitbear
| 2023-06-22T19:40:18Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:40:09Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 451.60 +/- 145.20
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sid/q-FrozenLake-v1-4x4-noSlippery
|
sid
| 2023-06-22T19:36:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:36:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# load_from_hub is the helper defined in the Deep RL Course (Unit 2) notebook
model = load_from_hub(repo_id="sid/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
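A hedged sketch (not from the card) of acting greedily with the downloaded Q-table; this assumes the pickled dict stores the table under a `qtable` key, as in the course notebook, and a recent gym/gymnasium step API:
```python
# Hedged sketch: greedy rollout from the Q-table (the "qtable" key is assumed).
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the highest-value action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```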
|
bonzo1971/roberta-base-bne-finetuned-amazon_reviews_multi
|
bonzo1971
| 2023-06-22T19:20:38Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T18:59:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
config: es
split: validation
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2219
- Accuracy: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1943 | 1.0 | 1250 | 0.1669 | 0.9327 |
| 0.0982 | 2.0 | 2500 | 0.2219 | 0.9333 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fx1H/Reinforce_Agent_Playing_Pixelcopter-PLE-v0
|
fx1H
| 2023-06-22T19:09:16Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T19:08:27Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Agent_Playing_Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 6.43 +/- 8.32
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JoaoYukio/ppo-Huggy
|
JoaoYukio
| 2023-06-22T19:09:04Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:59:12Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: JoaoYukio/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gosorio/IMDB_HF-Tutorial
|
gosorio
| 2023-06-22T18:50:04Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T17:16:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: IMDB_HF-Tutorial
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9316
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMDB_HF-Tutorial
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2337
- Accuracy: 0.9316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2314 | 1.0 | 1563 | 0.1846 | 0.9301 |
| 0.1483 | 2.0 | 3126 | 0.2337 | 0.9316 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
valerio-unifei/ppo-Huggy
|
valerio-unifei
| 2023-06-22T18:44:53Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:44:46Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: valerio-unifei/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
webstels/nekta_help_tc
|
webstels
| 2023-06-22T18:42:00Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-16T13:23:52Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: nekta_help_tc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nekta_help_tc
This model is a fine-tuned version of [webstels/nekta_help_tc](https://huggingface.co/webstels/nekta_help_tc) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0145
- Accuracy: 0.9933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 341 | 0.7823 | 0.7767 |
| 1.61 | 2.0 | 682 | 0.5028 | 0.8367 |
| 0.6434 | 3.0 | 1023 | 0.3594 | 0.8667 |
| 0.6434 | 4.0 | 1364 | 0.2428 | 0.9133 |
| 0.3982 | 5.0 | 1705 | 0.1740 | 0.94 |
| 0.2816 | 6.0 | 2046 | 0.1388 | 0.9367 |
| 0.2816 | 7.0 | 2387 | 0.0960 | 0.97 |
| 0.1886 | 8.0 | 2728 | 0.0430 | 0.99 |
| 0.1388 | 9.0 | 3069 | 0.0490 | 0.9833 |
| 0.1388 | 10.0 | 3410 | 0.0332 | 0.9867 |
| 0.1009 | 11.0 | 3751 | 0.0222 | 0.9933 |
| 0.0718 | 12.0 | 4092 | 0.0253 | 0.9867 |
| 0.0718 | 13.0 | 4433 | 0.0156 | 0.9933 |
| 0.0572 | 14.0 | 4774 | 0.0162 | 0.9967 |
| 0.0476 | 15.0 | 5115 | 0.0211 | 0.9933 |
| 0.0476 | 16.0 | 5456 | 0.0135 | 0.9933 |
| 0.0369 | 17.0 | 5797 | 0.0125 | 0.9967 |
| 0.0309 | 18.0 | 6138 | 0.0206 | 0.9933 |
| 0.0309 | 19.0 | 6479 | 0.0159 | 0.9933 |
| 0.0248 | 20.0 | 6820 | 0.0145 | 0.9933 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
rightspeed/ppo-LunarLander-v2
|
rightspeed
| 2023-06-22T18:35:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:34:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.46 +/- 21.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
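Since the card leaves the usage section as a TODO, here is a minimal hedged sketch of loading the checkpoint with `huggingface_sb3`; the filename `ppo-LunarLander-v2.zip` is an assumption, so check the repository's file list:
```python
# Hedged sketch: load the PPO checkpoint from the Hub and evaluate it.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="rightspeed/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename; verify in the repo
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```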
|
fx1H/Reinforce_Agent_Playing-CartPole-v1
|
fx1H
| 2023-06-22T18:32:33Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T18:32:26Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Agent_Playing-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 193.10 +/- 21.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tlsalfm820/wav2vec2-base-librispeech32
|
tlsalfm820
| 2023-06-22T18:27:04Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-22T04:32:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-base-librispeech32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-librispeech32
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3137
- Wer: 0.1101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6899 | 2.07 | 500 | 2.7071 | 0.9991 |
| 0.5624 | 4.13 | 1000 | 0.4322 | 0.2304 |
| 0.1855 | 6.2 | 1500 | 0.4234 | 0.2023 |
| 0.1224 | 8.26 | 2000 | 0.4044 | 0.1852 |
| 0.0928 | 10.33 | 2500 | 0.4644 | 0.2213 |
| 0.0766 | 12.4 | 3000 | 0.3669 | 0.1459 |
| 0.0655 | 14.46 | 3500 | 0.3215 | 0.1414 |
| 0.0544 | 16.53 | 4000 | 0.3524 | 0.1292 |
| 0.0475 | 18.6 | 4500 | 0.4299 | 0.1818 |
| 0.0405 | 20.66 | 5000 | 0.3026 | 0.1226 |
| 0.0361 | 22.73 | 5500 | 0.3132 | 0.1206 |
| 0.0329 | 24.79 | 6000 | 0.3409 | 0.1086 |
| 0.03 | 26.86 | 6500 | 0.3183 | 0.1099 |
| 0.0276 | 28.93 | 7000 | 0.3137 | 0.1101 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
houssamb48/SynthoMindAI
|
houssamb48
| 2023-06-22T18:05:27Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T18:05:27Z |
---
license: creativeml-openrail-m
---
|
Blackroot/airoboros-7B-gpt4-1.4-half-wanda
|
Blackroot
| 2023-06-22T17:56:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-22T17:24:35Z |
2:4 pruned with Wanda. WikiText perplexity evaluates to ~11.4, against the base model's ~6.2.
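As a hedged illustration (not from this repo) of what the 2:4 pattern means in practice, a small check that every contiguous group of 4 weights has at most 2 nonzero entries:
```python
# Hedged sketch: verify a tensor follows 2:4 structured sparsity.
import torch

def check_2_4_sparsity(weight: torch.Tensor) -> bool:
    flat = weight.flatten()
    flat = flat[: flat.numel() // 4 * 4].reshape(-1, 4)  # contiguous groups of 4
    nonzero_per_group = (flat != 0).sum(dim=1)
    return bool((nonzero_per_group <= 2).all())

# Toy tensor that satisfies the pattern:
w = torch.tensor([0.3, 0.0, -1.2, 0.0, 0.0, 0.5, 0.0, 0.9])
print(check_2_4_sparsity(w))  # True
```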
|
bluemoonwj/movie_title_predictor
|
bluemoonwj
| 2023-06-22T17:53:17Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T16:58:53Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: movie_title_predictor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# movie_title_predictor
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0373 | 1.0 | 821 | 1.7633 |
| 1.7272 | 2.0 | 1642 | 1.6852 |
| 1.6767 | 3.0 | 2463 | 1.6553 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
zslrmhb/SpaceInvadersNoFrameskip-v4
|
zslrmhb
| 2023-06-22T17:48:31Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T16:30:08Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 703.00 +/- 168.87
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zslrmhb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zslrmhb -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zslrmhb
```
## Hyperparameters
```python
OrderedDict([('batch_size', 16),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Lilpopit/privet
|
Lilpopit
| 2023-06-22T17:42:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-22T17:35:31Z |
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Falon/ayaka-db"
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.content

image_bytes = query({
    "inputs": "Astronaut riding a horse",
})

# You can access the image with PIL.Image for example
import io
from PIL import Image
image = Image.open(io.BytesIO(image_bytes))
```
|
yanaayanaayanaa/siplora
|
yanaayanaayanaa
| 2023-06-22T17:41:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T17:36:16Z |
---
license: creativeml-openrail-m
---
|
mariololo/ppo-Huggy
|
mariololo
| 2023-06-22T17:39:32Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T17:39:24Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mariololo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AustinCarthy/Baseline_100Kphish_benignFall_9.5_20_20
|
AustinCarthy
| 2023-06-22T17:36:12Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2023-06-22T11:56:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_100Kphish_benignFall_9.5_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_100Kphish_benignFall_9.5_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Train benign: Fall, Test Benign: Fall, Train phish: Fall, Test phish: Fall, generated url dataset: generated_phish_MixGPT2V2_using_phish_95K_top_p_0.75suffix dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0498
- Accuracy: 0.9974
- F1: 0.9720
- Precision: 0.9987
- Recall: 0.9466
- Roc Auc Score: 0.9733
- Tpr At Fpr 0.01: 0.953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0142 | 1.0 | 16407 | 0.0389 | 0.9974 | 0.9719 | 0.9958 | 0.9492 | 0.9745 | 0.9348 |
| 0.0111 | 2.0 | 32814 | 0.0376 | 0.9977 | 0.9751 | 0.9975 | 0.9536 | 0.9767 | 0.951 |
| 0.0022 | 3.0 | 49221 | 0.0328 | 0.9981 | 0.9794 | 0.9961 | 0.9632 | 0.9815 | 0.9512 |
| 0.0 | 4.0 | 65628 | 0.0438 | 0.9977 | 0.9758 | 0.9985 | 0.954 | 0.9770 | 0.9566 |
| 0.0005 | 5.0 | 82035 | 0.0498 | 0.9974 | 0.9720 | 0.9987 | 0.9466 | 0.9733 | 0.953 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48_frz
|
gokuls
| 2023-06-22T17:15:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-20T09:59:23Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training_new_wt_init_48_frz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training_new_wt_init_48_frz
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4340
- Accuracy: 0.5488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.8468 | 0.08 | 10000 | 3.6051 | 0.4101 |
| 3.6009 | 0.16 | 20000 | 3.3734 | 0.4369 |
| 3.4559 | 0.25 | 30000 | 3.2348 | 0.4517 |
| 3.3578 | 0.33 | 40000 | 3.1395 | 0.4623 |
| 3.2803 | 0.41 | 50000 | 3.0632 | 0.4709 |
| 3.2157 | 0.49 | 60000 | 3.0010 | 0.4780 |
| 3.1503 | 0.57 | 70000 | 2.9554 | 0.4838 |
| 3.1044 | 0.66 | 80000 | 2.9104 | 0.4888 |
| 3.0703 | 0.74 | 90000 | 2.8759 | 0.4931 |
| 3.029 | 0.82 | 100000 | 2.8357 | 0.4976 |
| 2.9907 | 0.9 | 110000 | 2.8082 | 0.5013 |
| 2.9619 | 0.98 | 120000 | 2.7805 | 0.5042 |
| 2.9284 | 1.07 | 130000 | 2.7578 | 0.5072 |
| 2.9027 | 1.15 | 140000 | 2.7295 | 0.5103 |
| 2.8738 | 1.23 | 150000 | 2.7094 | 0.5133 |
| 2.8603 | 1.31 | 160000 | 2.6848 | 0.5160 |
| 2.829 | 1.39 | 170000 | 2.6667 | 0.5185 |
| 2.8106 | 1.47 | 180000 | 2.6479 | 0.5208 |
| 2.7942 | 1.56 | 190000 | 2.6304 | 0.5227 |
| 2.772 | 1.64 | 200000 | 2.6156 | 0.5249 |
| 2.7546 | 1.72 | 210000 | 2.5994 | 0.5270 |
| 2.7348 | 1.8 | 220000 | 2.5858 | 0.5290 |
| 2.725 | 1.88 | 230000 | 2.5728 | 0.5304 |
| 2.7116 | 1.97 | 240000 | 2.5587 | 0.5324 |
| 2.6953 | 2.05 | 250000 | 2.5476 | 0.5338 |
| 2.6883 | 2.13 | 260000 | 2.5339 | 0.5355 |
| 2.6768 | 2.21 | 270000 | 2.5231 | 0.5371 |
| 2.6622 | 2.29 | 280000 | 2.5097 | 0.5383 |
| 2.6499 | 2.38 | 290000 | 2.5026 | 0.5396 |
| 2.6361 | 2.46 | 300000 | 2.4916 | 0.5412 |
| 2.629 | 2.54 | 310000 | 2.4843 | 0.5421 |
| 2.6269 | 2.62 | 320000 | 2.4737 | 0.5432 |
| 2.6175 | 2.7 | 330000 | 2.4676 | 0.5443 |
| 2.5961 | 2.79 | 340000 | 2.4580 | 0.5457 |
| 2.5926 | 2.87 | 350000 | 2.4502 | 0.5468 |
| 2.5866 | 2.95 | 360000 | 2.4413 | 0.5480 |
| 2.5781 | 3.03 | 370000 | 2.4340 | 0.5488 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Yireonzz/mshadows
|
Yireonzz
| 2023-06-22T17:12:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T17:07:19Z |
---
license: creativeml-openrail-m
---
|
VMware/electra-small-mrqa
|
VMware
| 2023-06-22T16:36:12Z | 251 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"electra",
"question-answering",
"en",
"dataset:mrqa",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-17T21:28:48Z |
---
license: apache-2.0
datasets:
- mrqa
language:
- en
metrics:
- exact_match
- f1
model-index:
- name: VMware/electra-small-mrqa
results:
- task:
type: Question-Answering
dataset:
type: mrqa
name: MRQA
metrics:
- type: exact_match
value: 57.63
name: Eval EM
- type: f1
value: 69.38
name: Eval F1
- type: exact_match
value: 38.68
name: Test EM
- type: f1
value: 51.56
name: Test F1
---
This model release is part of a joint research project with Howard University's Innovation Foundry/AIM-AHEAD Lab.
# Model Details
- **Model name:** ELECTRA-Small-MRQA
- **Model type:** Extractive Question Answering
- **Parent Model:** [ELECTRA-Small-Discriminator](https://huggingface.co/google/electra-small-discriminator)
- **Training dataset:** [MRQA](https://huggingface.co/datasets/mrqa) (Machine Reading for Question Answering)
- **Training data size:** 516,819 examples
- **Training time:** 2:16:36 on 1 Nvidia V100 32GB GPU
- **Language:** English
- **Framework:** PyTorch
- **Model version:** 1.0
# Intended Use
This model is intended to provide accurate answers to questions based on context passages. It can be used for a variety of tasks, including question-answering for search engines, chatbots, customer service systems, and other applications that require natural language understanding.
# How to Use
```python
from transformers import pipeline
question_answerer = pipeline("question-answering", model='VMware/electra-small-mrqa')
context = "We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems. In this task, we adapted and unified 18 distinct question answering datasets into the same format. Among them, six datasets were made available for training, six datasets were made available for development, and the final six were hidden for final evaluation. Ten teams submitted systems, which explored various ideas including data sampling, multi-task learning, adversarial training and ensembling. The best system achieved an average F1 score of 72.5 on the 12 held-out datasets, 10.7 absolute points higher than our initial baseline based on BERT."
question = "What is MRQA?"
result = question_answerer(question=question, context=context)
print(result)
# {
# 'score': 0.3399854898452759,
# 'start': 30,
# 'end': 68,
# 'answer': 'Machine Reading for Question Answering'
# }
```
# Training Details
The model was trained for 1 epoch on the MRQA training set.
## Training Hyperparameters
```python
args = TrainingArguments(
"electra-small-mrqa",
save_strategy="epoch",
learning_rate=1e-5,
num_train_epochs=1,
weight_decay=0.01,
per_device_train_batch_size=16,
)
```
# Evaluation Metrics
The model was evaluated using standard metrics for question-answering models, including:
- Exact match (EM): the percentage of questions for which the model produces an exact match with the ground truth answer.
- F1 score: a weighted average of precision and recall, which measures the overlap between the predicted answer and the ground truth answer.
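A minimal hedged sketch (not from the card) of computing these two metrics with the `evaluate` library's `squad` metric:
```python
# Hedged sketch: exact match and F1 for extractive QA via evaluate's "squad" metric.
import evaluate

squad_metric = evaluate.load("squad")
predictions = [{"id": "1", "prediction_text": "Machine Reading for Question Answering"}]
references = [{"id": "1", "answers": {"text": ["Machine Reading for Question Answering"],
                                      "answer_start": [30]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```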
# Model Family Performance
| Parent Language Model | Number of Parameters | Training Time | Eval Time | Test Time | Eval EM | Eval F1 | Test EM | Test F1 |
|---|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| BERT-Tiny | 4,369,666 | 26:11 | 0:41 | 0:04 | 22.78 | 32.42 | 10.18 | 18.72 |
| BERT-Base | 108,893,186 | 8:39:10 | 18:42 | 2:13 | 64.48 | 76.14 | 48.89 | 59.89 |
| BERT-Large | 334,094,338 | 28:35:38 | 1:00:56 | 7:14 | 69.52 | 80.50 | 55.00 | 65.78 |
| DeBERTa-v3-Extra-Small | 70,682,882 | 5:19:05 | 11:29 | 1:16 | 65.58 | 77.17 | 50.92 | 62.58 |
| DeBERTa-v3-Base | 183,833,090 | 12:13:41 | 28:18 | 3:09 | 71.43 | 82.59 | 59.49 | 70.46 |
| DeBERTa-v3-Large | 434,014,210 | 38:36:13 | 1:25:47 | 9:33 | **76.08** | **86.23** | **64.27** | **75.22** |
| ELECTRA-Small | 13,483,522 | 2:16:36 | 3:55 | 0:27 | 57.63 | 69.38 | 38.68 | 51.56 |
| ELECTRA-Base | 108,893,186 | 8:40:57 | 18:41 | 2:12 | 68.78 | 80.16 | 54.70 | 65.80 |
| ELECTRA-Large | 334,094,338 | 28:31:59 | 1:00:40 | 7:13 | 74.15 | 84.96 | 62.35 | 73.28 |
| MiniLMv2-L6-H384-from-BERT-Large | 22,566,146 | 2:12:48 | 4:23 | 0:40 | 59.31 | 71.09 | 41.78 | 53.30 |
| MiniLMv2-L6-H768-from-BERT-Large | 66,365,954 | 4:42:59 | 10:01 | 1:10 | 64.27 | 75.84 | 49.05 | 59.82 |
| MiniLMv2-L6-H384-from-RoBERTa-Large | 30,147,842 | 2:15:10 | 4:19 | 0:30 | 59.27 | 70.64 | 42.95 | 54.03 |
| MiniLMv2-L12-H384-from-RoBERTa-Large | 40,794,626 | 4:14:22 | 8:27 | 0:58 | 64.58 | 76.23 | 51.28 | 62.83 |
| MiniLMv2-L6-H768-from-RoBERTa-Large | 81,529,346 | 4:39:02 | 9:34 | 1:06 | 65.80 | 77.17 | 51.72 | 63.27 |
| TinyRoBERTa | 81,529,346 | 4:27:06\* | 9:54 | 1:04 | 69.38 | 80.07 | 53.29 | 64.16 |
| RoBERTa-Base | 124,056,578 | 8:50:29 | 18:59 | 2:11 | 69.06 | 80.08 | 55.53 | 66.49 |
| RoBERTa-Large | 354,312,194 | 29:16:06 | 1:01:10 | 7:04 | 74.08 | 84.38 | 62.20 | 72.88 |
\* TinyRoBERTa's training time isn't directly comparable to the other models since it was distilled from [VMware/roberta-large-mrqa](https://huggingface.co/VMware/roberta-large-mrqa) that was already trained on MRQA.
# Limitations and Bias
The model is based on a large and diverse dataset, but it may still have limitations and biases in certain areas. Some limitations include:
- Language: The model is designed to work with English text only and may not perform as well on other languages.
- Domain-specific knowledge: The model has been trained on a general dataset and may not perform well on questions that require domain-specific knowledge.
- Out-of-distribution questions: The model may struggle with questions that are outside the scope of the MRQA dataset. This is best demonstrated by the delta between its scores on the eval vs test datasets.
In addition, the model may have some bias in terms of the data it was trained on. The dataset includes questions from a variety of sources, but it may not be representative of all populations or perspectives. As a result, the model may perform better or worse for certain types of questions or on certain types of texts.
|
UnaiGurbindo/dqn-SpaceInvadersNoFrameskip-v4
|
UnaiGurbindo
| 2023-06-22T16:24:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T15:07:48Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 507.00 +/- 141.81
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga UnaiGurbindo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga UnaiGurbindo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga UnaiGurbindo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Heefy/Emma
|
Heefy
| 2023-06-22T16:17:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T16:17:50Z |
---
license: creativeml-openrail-m
---
|
Niftynr/falcon-7b-e_100
|
Niftynr
| 2023-06-22T16:10:52Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T16:10:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
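For reference, a rough sketch of an equivalent quantization config in code (assuming a recent `transformers`/`bitsandbytes` install; this is not taken from the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the values listed above; remaining fields keep their library defaults.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```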
### Framework versions
- PEFT 0.4.0.dev0
|
Babelscape/mrebel-large-32
|
Babelscape
| 2023-06-22T16:09:51Z | 125 | 7 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"seq2seq",
"relation-extraction",
"translation",
"ar",
"ca",
"de",
"el",
"en",
"es",
"fr",
"hi",
"it",
"ja",
"ko",
"nl",
"pl",
"pt",
"ru",
"sv",
"vi",
"zh",
"arxiv:2306.09802",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-06-12T14:47:07Z |
---
language:
- ar
- ca
- de
- el
- en
- es
- fr
- hi
- it
- ja
- ko
- nl
- pl
- pt
- ru
- sv
- vi
- zh
widget:
- text: >-
I Red Hot Chili Peppers sono stati formati a Los Angeles da Kiedis, Flea, il chitarrista Hillel Slovak e il batterista Jack Irons.
example_title: "Italian"
inference:
parameters:
decoder_start_token_id: 250058
src_lang: "it_IT"
tgt_lang: "<triplet>"
tags:
- seq2seq
- relation-extraction
license: cc-by-nc-sa-4.0
pipeline_tag: translation
---
# RED<sup>FM</sup>: a Filtered and Multilingual Relation Extraction Dataset
This is a multilingual version of [REBEL](https://huggingface.co/Babelscape/rebel-large). It can be used as a standalone multilingual Relation Extraction system, or as a pretrained system to be tuned on multilingual Relation Extraction datasets.
mREBEL is introduced in the ACL 2023 paper [RED^{FM}: a Filtered and Multilingual Relation Extraction Dataset](https://arxiv.org/abs/2306.09802). We present a new multilingual Relation Extraction dataset and train a multilingual version of REBEL which reframed Relation Extraction as a seq2seq task. The paper can be found [here](https://arxiv.org/abs/2306.09802). If you use the code or model, please reference this work in your paper:
@inproceedings{huguet-cabot-et-al-2023-redfm-dataset,
title = "RED$^{\rm FM}$: a Filtered and Multilingual Relation Extraction Dataset",
author = "Huguet Cabot, Pere-Llu{\'\i}s and Tedeschi, Simone and Ngonga Ngomo, Axel-Cyrille and
Navigli, Roberto",
booktitle = "Proc. of the 61st Annual Meeting of the Association for Computational Linguistics: ACL 2023",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2306.09802",
}
The original repository for the paper can be found [here](https://github.com/Babelscape/rebel#REDFM)
Be aware that the inference widget at the right does not output special tokens, which are necessary to distinguish the subject, object and relation types. For a demo of mREBEL and its pre-training dataset check the [Spaces demo](https://huggingface.co/spaces/Babelscape/mrebel-demo).
## Pipeline usage
```python
from transformers import pipeline
triplet_extractor = pipeline('translation_xx_to_yy', model='Babelscape/mrebel-large-32', tokenizer='Babelscape/mrebel-large-32')
# We need to use the tokenizer manually since we need special tokens.
outputs = triplet_extractor(
    "The Red Hot Chili Peppers were formed in Los Angeles by Kiedis, Flea, guitarist Hillel Slovak and drummer Jack Irons.",
    decoder_start_token_id=250058,
    src_lang="en_XX",  # change en_XX for the language of the source.
    tgt_lang="<triplet>",
    return_tensors=True,
    return_text=False,
)
extracted_text = triplet_extractor.tokenizer.batch_decode([outputs[0]["translation_token_ids"]])
print(extracted_text[0])
# Function to parse the generated text and extract the triplets
def extract_triplets_typed(text):
triplets = []
relation = ''
text = text.strip()
current = 'x'
subject, relation, object_, object_type, subject_type = '','','','',''
for token in text.replace("<s>", "").replace("<pad>", "").replace("</s>", "").replace("tp_XX", "").replace("__en__", "").split():
if token == "<triplet>" or token == "<relation>":
current = 't'
if relation != '':
triplets.append({'head': subject.strip(), 'head_type': subject_type, 'type': relation.strip(),'tail': object_.strip(), 'tail_type': object_type})
relation = ''
subject = ''
elif token.startswith("<") and token.endswith(">"):
if current == 't' or current == 'o':
current = 's'
if relation != '':
triplets.append({'head': subject.strip(), 'head_type': subject_type, 'type': relation.strip(),'tail': object_.strip(), 'tail_type': object_type})
object_ = ''
subject_type = token[1:-1]
else:
current = 'o'
object_type = token[1:-1]
relation = ''
else:
if current == 't':
subject += ' ' + token
elif current == 's':
object_ += ' ' + token
elif current == 'o':
relation += ' ' + token
if subject != '' and relation != '' and object_ != '' and object_type != '' and subject_type != '':
triplets.append({'head': subject.strip(), 'head_type': subject_type, 'type': relation.strip(),'tail': object_.strip(), 'tail_type': object_type})
return triplets
extracted_triplets = extract_triplets_typed(extracted_text[0])
print(extracted_triplets)
```
## Model and Tokenizer using transformers
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
def extract_triplets_typed(text):
triplets = []
relation = ''
text = text.strip()
current = 'x'
subject, relation, object_, object_type, subject_type = '','','','',''
for token in text.replace("<s>", "").replace("<pad>", "").replace("</s>", "").replace("tp_XX", "").replace("__en__", "").split():
if token == "<triplet>" or token == "<relation>":
current = 't'
if relation != '':
triplets.append({'head': subject.strip(), 'head_type': subject_type, 'type': relation.strip(),'tail': object_.strip(), 'tail_type': object_type})
relation = ''
subject = ''
elif token.startswith("<") and token.endswith(">"):
if current == 't' or current == 'o':
current = 's'
if relation != '':
triplets.append({'head': subject.strip(), 'head_type': subject_type, 'type': relation.strip(),'tail': object_.strip(), 'tail_type': object_type})
object_ = ''
subject_type = token[1:-1]
else:
current = 'o'
object_type = token[1:-1]
relation = ''
else:
if current == 't':
subject += ' ' + token
elif current == 's':
object_ += ' ' + token
elif current == 'o':
relation += ' ' + token
if subject != '' and relation != '' and object_ != '' and object_type != '' and subject_type != '':
triplets.append({'head': subject.strip(), 'head_type': subject_type, 'type': relation.strip(),'tail': object_.strip(), 'tail_type': object_type})
return triplets
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Babelscape/mrebel-large-32", src_lang="en_XX", tgt_lang="tp_XX")
# Here we set English ("en_XX") as source language. To change the source language swap the first token of the input for your desired language or change to supported language. For catalan ("ca_XX") or greek ("el_EL") (not included in mBART pretraining) you need a workaround:
# tokenizer._src_lang = "ca_XX"
# tokenizer.cur_lang_code_id = tokenizer.convert_tokens_to_ids("ca_XX")
# tokenizer.set_src_lang_special_tokens("ca_XX")
model = AutoModelForSeq2SeqLM.from_pretrained("Babelscape/mrebel-large-32")
gen_kwargs = {
"max_length": 256,
"length_penalty": 0,
"num_beams": 3,
"num_return_sequences": 3,
"forced_bos_token_id": None,
}
# Text to extract triplets from
text = 'The Red Hot Chili Peppers were formed in Los Angeles by Kiedis, Flea, guitarist Hillel Slovak and drummer Jack Irons.'
# Tokenize text
model_inputs = tokenizer(text, max_length=256, padding=True, truncation=True, return_tensors = 'pt')
# Generate
generated_tokens = model.generate(
model_inputs["input_ids"].to(model.device),
attention_mask=model_inputs["attention_mask"].to(model.device),
decoder_start_token_id = tokenizer.convert_tokens_to_ids("tp_XX"),
**gen_kwargs,
)
# Extract text
decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=False)
# Extract triplets
for idx, sentence in enumerate(decoded_preds):
print(f'Prediction triplets sentence {idx}')
print(extract_triplets_typed(sentence))
```
## License
This model is licensed under the CC BY-NC-SA 4.0 license. The text of the license can be found [here](https://creativecommons.org/licenses/by-nc-sa/4.0/).
|
WillieBaker/Conlan
|
WillieBaker
| 2023-06-22T16:07:39Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-22T16:07:07Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
brunoleme/my_awesome_eli5_clm-model
|
brunoleme
| 2023-06-22T16:02:27Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T15:00:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7753
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
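Expressed as `transformers.TrainingArguments`, the configuration above corresponds roughly to the following sketch (the output directory is a placeholder, not taken from the card):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my_awesome_eli5_clm-model",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```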
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8709 | 1.0 | 1113 | 3.7946 |
| 3.7741 | 2.0 | 2226 | 3.7780 |
| 3.7275 | 3.0 | 3339 | 3.7753 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Mtc2/q-FrozenLake-v1-4x4-noSlippery
|
Mtc2
| 2023-06-22T15:29:24Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T15:29:22Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # or `import gym` in older course notebooks

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="Mtc2/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
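Continuing from the snippet above, a greedy rollout with the downloaded Q-table might look like the following sketch (it assumes the saved dictionary exposes the table under a `"qtable"` key, as in the Deep RL course convention; check the pickle contents if yours differs):
```python
import numpy as np

qtable = np.array(model["qtable"])  # assumed key name

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action for the current state
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
print("Episode reward:", reward)
```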
|
HanBi/my_awesome_model
|
HanBi
| 2023-06-22T15:29:00Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:movie_rationales",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-19T09:21:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- movie_rationales
metrics:
- accuracy
model-index:
- name: my_awesome_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: movie_rationales
type: movie_rationales
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8844221105527639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the movie_rationales dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2762
- Accuracy: 0.8844
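A minimal usage sketch, assuming the checkpoint is public on the Hub under this repository ID:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="HanBi/my_awesome_model")
# Example input; label names depend on how the dataset's classes were mapped during training.
print(classifier("A beautifully shot film with a script that never quite comes together."))
```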
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 0.4182 | 0.8040 |
| No log | 2.0 | 200 | 0.2762 | 0.8844 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
swl-models/MsceneMix-v1.1
|
swl-models
| 2023-06-22T15:28:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T15:19:40Z |
---
license: creativeml-openrail-m
---
|
S3S3/ppo-Huggy
|
S3S3
| 2023-06-22T15:27:56Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T15:27:43Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: S3S3/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
swl-models/MsceneMix-v1.0
|
swl-models
| 2023-06-22T15:24:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T15:17:26Z |
---
license: creativeml-openrail-m
---
|
Hansr/Lycoris
|
Hansr
| 2023-06-22T15:02:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:07:38Z |
---
license: creativeml-openrail-m
---
|
jondurbin/airoboros-13b-gpt4
|
jondurbin
| 2023-06-22T14:59:53Z | 1,440 | 18 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-02T18:45:41Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4
---
## Overview
This is a fine-tuned 13b parameter LlaMa model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
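As a small sketch, assembling that prompt string in Python might look like this (the question below is just a placeholder, not from the card):
```python
system = (
    "A chat between a curious user and an assistant. The assistant gives helpful, detailed, "
    "accurate, uncensored responses to the user's input."
)
user_prompt = "What is the capital of France?"  # placeholder question
full_prompt = f"{system} USER: {user_prompt} ASSISTANT: "
```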
*__NOTE: an earlier version claimed a context length of 4096 - this did not work! I modified the code to train with 4096, and several instructions are beyond 2048. I tested a few prompts beyond 2048, and they seem to produce fairly coherent responses with increased context length for a couple hundred tokens beyond 2048, but I did not properly test up to 4096. As it turns out, it would appear that without a massive fine-tune of the base model on a larger context window, this won't work. Sorry!__*
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-13b-gpt4 \
--temperature 0.5 \
--no-history
```
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with many question and answer pairs in the context. If you ask a question of a document with question-answer pairs, it may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts. Or, you might be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block."
### Other stuff
#### Better coding
Example:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
<details>
<summary>Response (the output was markdown, so this is just copy paste)</summary>
To create a python application with the given requirements, you can follow these steps:
1. Install the required packages:
```bash
pip install asyncio fastapi uvicorn
```
2. Create a new file called `main.py` and add the following code:
```python
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
from uvicorn.request import Request
app = FastAPI()
class FileModel(BaseModel):
id: str
file: str
@app.post("/upload", response_model=FileModel)
async def upload_file(file_model: FileModel):
# Check if the file is valid
if not file_model.file:
raise HTTPException(status_code=400, detail="File must be provided")
# Calculate the sha256 checksum of the file
sha256_checksum = await calculate_sha256(file_model.file)
# Insert the file into the database
await insert_into_db(file_model.id, sha256_checksum)
return {"sha256_checksum": sha256_checksum}
async def calculate_sha256(file_path: str) -> str:
# Implement the sha256 calculation logic here
pass
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
# Implement the database insertion logic here
pass
async def get_current_date():
return {"date": datetime.now().isoformat()}
async def main():
# Start the FastAPI webserver
app.run(debug=True)
if __name__ == "__main__":
await main()
```
3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. For example:
```python
async def calculate_sha256(file_path: str) -> str:
import hashlib
file_path = Path(file_path).read_text()
hash_obj = hashlib.sha256()
hash_obj.update(file_path)
sha256_checksum = hash_obj.hexdigest()
return sha256_checksum
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
import psycopg2
conn = psycopg2.connect(database="your_database", user="your_user", password="your_password")
cursor = conn.cursor()
query = f"INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);"
cursor.execute(query, (file_id, sha256_checksum))
conn.commit()
cursor.close()
conn.close()
```
4. Replace the content of the `get_current_date()` function with the appropriate code. For example:
```python
async def get_current_date():
return {"date": datetime.now().isoformat()}
```
5. Run the application:
```bash
uvicorn main:app --reload
```
Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate.
</details>
<details>
<summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary>
Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values.
```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>
#include <pthread.h>
#define MAX_KEY_LEN 256
#define MAX_VALUE_LEN 256
typedef struct Node {
char key[MAX_KEY_LEN];
char value[MAX_VALUE_LEN];
struct Node* left;
struct Node* right;
} TreeNode;
TreeNode* tree = NULL;
void insert(char* key, char* value) {
TreeNode* parent = (TreeNode*)malloc(sizeof(TreeNode));
if (!parent) {
perror("malloc failed");
exit(1);
}
strcpy(parent->key, key);
strcpy(parent->value, value);
parent->left = parent->right = NULL;
if (!tree) {
tree = parent;
} else {
insert(parent->key, parent->value);
}
}
char* search(char* key) {
TreeNode* current = tree;
while (current != NULL) {
if (strcmp(key, current->key) == 0) {
return current->value;
}
current = current->right;
}
return NULL;
}
void print_tree() {
TreeNode* current = tree;
while (current != NULL) {
printf("%s = %s\n", current->key, current->value);
current = current->right;
}
}
void* accept_client(void* args) {
int client_socket = *(int*)args;
char buffer[1024];
while (1) {
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
char* input = buffer;
char* input_token;
while ((input_token = strtok(input, " \n")) != NULL) {
switch (input_token[0]) {
case 'G':
if (!search(input_token)) {
perror("Value not found");
exit(1);
}
printf("%s = %s\n", input_token, search(input_token));
break;
case 'S':
if (!strcmp(input_token, "SET")) {
char* key = strtok(NULL, " \n");
char* value = strtok(NULL, " \n");
if (!key || !value) {
perror("Invalid input");
exit(1);
}
insert(key, value);
printf("%s set %s to %s\n", input_token, key, value);
} else {
perror("Invalid command");
exit(1);
}
break;
default:
perror("Invalid input");
exit(1);
}
input = strtok(NULL, " \n");
}
}
return NULL;
}
int main() {
int server_socket = socket(AF_INET, SOCK_STREAM, 0);
if (server_socket < 0) {
perror("socket failed");
exit(1);
}
struct sockaddr_in server_addr;
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(8080);
server_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
perror("bind failed");
exit(1);
}
if (listen(server_socket, 5) < 0) {
perror("listen failed");
exit(1);
}
pthread_t accept_thread;
pthread_create(&accept_thread, NULL, accept_client, &server_socket);
char* client_input;
int client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL);
if (client_socket < 0) {
perror("accept failed");
exit(1);
}
while (1) {
sleep(1);
char buffer[1024];
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
client_input = buffer;
parse_input(client_input);
}
close(client_socket);
pthread_join(accept_thread, NULL);
return 0;
}
```
To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands:
```sh
gcc -o multi_threaded_tcp_server multi_threaded_tcp_server.c
./multi_threaded_tcp_server
```
This implementation creates a simple binary tree to store the input values. The accept() function creates a new thread for each client connection, which handles the incoming data and forwards it to the main thread. The main thread parses the input string and handles SET and GET commands.
Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree.
</details>
#### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
#### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: [C, Pluto]
```
#### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon
```
</details>
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-7b-gpt4
|
jondurbin
| 2023-06-22T14:59:45Z | 1,420 | 4 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-03T00:33:22Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4
---
## Overview
This is a fine-tuned 7b parameter LlaMa model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
*Note: the example prompt response pairs below are from the 13b model, YMMV with the 7b*
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4 \
--temperature 0.5 \
--no-history
```
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
### Other stuff
#### Better coding
Example:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
<details>
<summary>Response (the output was markdown, so this is just copy paste)</summary>
To create a python application with the given requirements, you can follow these steps:
1. Install the required packages:
```bash
pip install asyncio fastapi uvicorn
```
2. Create a new file called `main.py` and add the following code:
```python
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
from uvicorn.request import Request
app = FastAPI()
class FileModel(BaseModel):
id: str
file: str
@app.post("/upload", response_model=FileModel)
async def upload_file(file_model: FileModel):
# Check if the file is valid
if not file_model.file:
raise HTTPException(status_code=400, detail="File must be provided")
# Calculate the sha256 checksum of the file
sha256_checksum = await calculate_sha256(file_model.file)
# Insert the file into the database
await insert_into_db(file_model.id, sha256_checksum)
return {"sha256_checksum": sha256_checksum}
async def calculate_sha256(file_path: str) -> str:
# Implement the sha256 calculation logic here
pass
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
# Implement the database insertion logic here
pass
async def get_current_date():
return {"date": datetime.now().isoformat()}
async def main():
# Start the FastAPI webserver
app.run(debug=True)
if __name__ == "__main__":
await main()
```
3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. For example:
```python
async def calculate_sha256(file_path: str) -> str:
import hashlib
file_path = Path(file_path).read_text()
hash_obj = hashlib.sha256()
hash_obj.update(file_path)
sha256_checksum = hash_obj.hexdigest()
return sha256_checksum
async def insert_into_db(file_id: str, sha256_checksum: str) -> None:
import psycopg2
conn = psycopg2.connect(database="your_database", user="your_user", password="your_password")
cursor = conn.cursor()
query = f"INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);"
cursor.execute(query, (file_id, sha256_checksum))
conn.commit()
cursor.close()
conn.close()
```
4. Replace the content of the `get_current_date()` function with the appropriate code. For example:
```python
async def get_current_date():
return {"date": datetime.now().isoformat()}
```
5. Run the application:
```bash
uvicorn main:app --reload
```
Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate.
</details>
<details>
<summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary>
Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values.
```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>
#include <pthread.h>
#define MAX_KEY_LEN 256
#define MAX_VALUE_LEN 256
typedef struct Node {
char key[MAX_KEY_LEN];
char value[MAX_VALUE_LEN];
struct Node* left;
struct Node* right;
} TreeNode;
TreeNode* tree = NULL;
void insert(char* key, char* value) {
TreeNode* parent = (TreeNode*)malloc(sizeof(TreeNode));
if (!parent) {
perror("malloc failed");
exit(1);
}
strcpy(parent->key, key);
strcpy(parent->value, value);
parent->left = parent->right = NULL;
if (!tree) {
tree = parent;
} else {
insert(parent->key, parent->value);
}
}
char* search(char* key) {
TreeNode* current = tree;
while (current != NULL) {
if (strcmp(key, current->key) == 0) {
return current->value;
}
current = current->right;
}
return NULL;
}
void print_tree() {
TreeNode* current = tree;
while (current != NULL) {
printf("%s = %s\n", current->key, current->value);
current = current->right;
}
}
void* accept_client(void* args) {
int client_socket = *(int*)args;
char buffer[1024];
while (1) {
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
char* input = buffer;
char* input_token;
while ((input_token = strtok(input, " \n")) != NULL) {
switch (input_token[0]) {
case 'G':
if (!search(input_token)) {
perror("Value not found");
exit(1);
}
printf("%s = %s\n", input_token, search(input_token));
break;
case 'S':
if (!strcmp(input_token, "SET")) {
char* key = strtok(NULL, " \n");
char* value = strtok(NULL, " \n");
if (!key || !value) {
perror("Invalid input");
exit(1);
}
insert(key, value);
printf("%s set %s to %s\n", input_token, key, value);
} else {
perror("Invalid command");
exit(1);
}
break;
default:
perror("Invalid input");
exit(1);
}
input = strtok(NULL, " \n");
}
}
return NULL;
}
int main() {
int server_socket = socket(AF_INET, SOCK_STREAM, 0);
if (server_socket < 0) {
perror("socket failed");
exit(1);
}
struct sockaddr_in server_addr;
memset(&server_addr, 0, sizeof(server_addr));
server_addr.sin_family = AF_INET;
server_addr.sin_port = htons(8080);
server_addr.sin_addr.s_addr = INADDR_ANY;
if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) {
perror("bind failed");
exit(1);
}
if (listen(server_socket, 5) < 0) {
perror("listen failed");
exit(1);
}
pthread_t accept_thread;
pthread_create(&accept_thread, NULL, accept_client, &server_socket);
char* client_input;
int client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL);
if (client_socket < 0) {
perror("accept failed");
exit(1);
}
while (1) {
sleep(1);
char buffer[1024];
ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0);
if (bytes_received <= 0) {
close(client_socket);
break;
}
client_input = buffer;
parse_input(client_input);
}
close(client_socket);
pthread_join(accept_thread, NULL);
return 0;
}
```
To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands:
```sh
gcc -o multi_threaded_tcp_server multi_threaded_tcp_server.c
./multi_threaded_tcp_server
```
This implementation creates a simple binary tree to store the input values. The accept() function creates a new thread for each client connection, which handles the incoming data and forwards it to the main thread. The main thread parses the input string and handles SET and GET commands.
Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree.
</details>
#### Word games / trivia
```
USER: This anagram of "silent" is what you do when paying attention and hearing someone.
ASSISTANT: listen
```
```
USER: Give me a list of 7 words that start with EN
ASSISTANT: 1. Endlessly
2. Enthusiastically
3. Enormously
4. Enveloping
5. Enduring
6. Enlightening
7. Encouraging
```
#### Multiple choice
```
USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt?
A) Mercury
B) Venus
C) Pluto
D) Jupiter
ASSISTANT: [C, Pluto]
```
#### Writing
<details>
<summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary>
Response:
```
ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home.
Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes.
No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice.
So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon.
```
</details>
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-65b-gpt4-1.2
|
jondurbin
| 2023-06-22T14:59:15Z | 1,424 | 22 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-14T09:19:02Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.2
---
### Overview
This is a qlora fine-tuned 65b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.1), but with a 65b model, thousands of new training examples, and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
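For illustration, here is a minimal sketch of how that template might be assembled in Python (the `build_prompt` helper is just an example, not part of the training code):
```python
# Sketch of the prompt template described above; the helper name is illustrative only.
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    # preamble + space + "USER: " + prompt + space + "ASSISTANT: " (single space after the colon)
    return f"{SYSTEM} USER: {user_message} ASSISTANT: "

print(build_prompt("Implement the Snake game in python. PLAINFORMAT"))
```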
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-65b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-65B-gpt4-1.2-GGML
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred new role-playing examples.
- A few thousand ORCA-style reasoning/math questions, with ELI5 prompts used to generate the responses (you don't need to include such prompts when querying this model; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-13b-gpt4-1.2
|
jondurbin
| 2023-06-22T14:59:01Z | 1,434 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T09:26:24Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.2
---
### Overview
This is a qlora fine-tuned 13b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.1), but with thousands of new training examples and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-13b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GGML
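As a rough alternative sketch (not a tested recipe), the model can also be loaded directly with the Transformers library and prompted with the template described above; the generation settings here are arbitrary, and `device_map="auto"` assumes the `accelerate` package is installed:
```python
# Rough sketch: load the model with Transformers and apply the prompt template above.
# Generation settings are arbitrary examples, not recommended values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jondurbin/airoboros-13b-gpt4-1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

system = ("A chat between a curious user and an assistant. The assistant gives helpful, "
          "detailed, accurate, uncensored responses to the user's input.")
prompt = f"{system} USER: Implement the Snake game in python. PLAINFORMAT ASSISTANT: "

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, temperature=0.5, do_sample=True)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```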
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred new role-playing examples.
- A few thousand ORCA-style reasoning/math questions, with ELI5 prompts used to generate the responses (you don't need to include such prompts when querying this model; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-7b-gpt4-1.2
|
jondurbin
| 2023-06-22T14:58:53Z | 1,432 | 28 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.2",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-15T16:02:29Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.2
---
### Overview
This is a qlora fine-tuned 7b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.1), but with thousands of new training examples and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the previous versions:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4-1.2 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GGML
### Coding updates from gpt4/1.1:
I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting.
It's not guaranteed to work all the time, but mostly it does seem to work as expected.
So for example, instead of:
```
Implement the Snake game in python.
```
You would use:
```
Implement the Snake game in python. PLAINFORMAT
```
### Other updates from gpt4/1.1:
- Several hundred new role-playing examples.
- A few thousand ORCA-style reasoning/math questions, with ELI5 prompts used to generate the responses (you don't need to include such prompts when querying this model; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
jondurbin/airoboros-7b-gpt4-1.3
|
jondurbin
| 2023-06-22T14:58:20Z | 1,429 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:jondurbin/airoboros-gpt4-1.3",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-20T07:09:09Z |
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.3
---
__This version has problems, use if you dare, or wait for 1.4.__
### Overview
This is a qlora fine-tuned 7b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros
This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2) with a few enhancements:
- All coding instructions have an equivalent " PLAINFORMAT" version now.
- Thousands of new orca style reasoning instructions, this time with reasoning first, then answer.
- A few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.
This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with previous full fine-tune versions.
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
### Usage
To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.
```
pip install git+https://github.com/jondurbin/FastChat
```
Be sure you are pulling the latest branch!
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-7b-gpt4-1.3 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Usage and License Notices
All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:
- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with OpenAI
So, to reiterate: this model (and datasets) cannot be used commercially.
|
swl-models/CuteYukiMix-KawaShow
|
swl-models
| 2023-06-22T14:58:18Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:50:37Z |
---
license: creativeml-openrail-m
---
|
Barianc/distilroberta-base-finetuned-wikitext2
|
Barianc
| 2023-06-22T14:58:01Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T14:16:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0852 | 1.0 | 2406 | 1.9234 |
| 1.992 | 2.0 | 4812 | 1.8828 |
| 1.9603 | 3.0 | 7218 | 1.8223 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Serendipity34/my_awesome_eli5_clm-model
|
Serendipity34
| 2023-06-22T14:56:27Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T12:15:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8711 | 1.0 | 1134 | 3.7645 |
| 3.7705 | 2.0 | 2268 | 3.7486 |
| 3.7324 | 3.0 | 3402 | 3.7448 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
swl-models/CuteYukiMix-b-X
|
swl-models
| 2023-06-22T14:53:54Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:49:12Z |
---
license: creativeml-openrail-m
---
|
swl-models/CuteYukiMix-v4.0
|
swl-models
| 2023-06-22T14:51:39Z | 0 | 4 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:34:27Z |
---
license: creativeml-openrail-m
---
|
swl-models/CuteYukiMix-v3.0
|
swl-models
| 2023-06-22T14:49:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:34:19Z |
---
license: creativeml-openrail-m
---
|
savasy/bert-turkish-uncased-qnli
|
savasy
| 2023-06-22T14:42:01Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# Turkish QNLI Model
I fine-tuned the Turkish BERT model for the question-answering problem with the Turkish version of SQuAD, TQuAD:
https://huggingface.co/dbmdz/bert-base-turkish-uncased
# Data: TQuAD
I used the following TQuAD dataset:
https://github.com/TQuad/turkish-nlp-qa-dataset
I converted the dataset into the Transformers GLUE data format for QNLI with the following script:
SQuAD -> QNLI
```
import json

# Pick the split to convert (the original script simply overrode the first assignment):
# ff = "dev-v0.1.json"
ff = "train-v0.1.json"
dataset = json.load(open(ff))

i = 0
for article in dataset['data']:
    title = article['title']
    for p in article['paragraphs']:
        context = p['context']
        for qa in p['qas']:
            answer = qa['answers'][0]['text']
            # Other answers from the same paragraph become negative (not_entailment) pairs
            all_other_answers = list(set([e['answers'][0]['text'] for e in p['qas']]))
            all_other_answers.remove(answer)
            i = i + 1
            print(i, qa['question'].replace(";", ":"), answer.replace(";", ":"), "entailment", sep="\t")
            for other in all_other_answers:
                i = i + 1
                print(i, qa['question'].replace(";", ":"), other.replace(";", ":"), "not_entailment", sep="\t")
```
Under the QNLI folder there are dev and test sets.
The training data looks like this:
> 613 II.Friedrich’in bilginler arasındaki en önemli şahsiyet olarak belirttiği kişi kimdir? filozof, kimyacı, astrolog ve çevirmen not_entailment
> 614 II.Friedrich’in bilginler arasındaki en önemli şahsiyet olarak belirttiği kişi kimdir? kişisel eğilimi ve özel temaslar nedeniyle not_entailment
> 615 Michael Scotus’un mesleği nedir? filozof, kimyacı, astrolog ve çevirmen entailment
> 616 Michael Scotus’un mesleği nedir? Palermo’ya not_entailment
# Training
I trained the model with the following environment:
```
export GLUE_DIR=./glue/glue_dataTR/QNLI
export TASK_NAME=QNLI
```
```
python3 run_glue.py \
--model_type bert \
--model_name_or_path dbmdz/bert-base-turkish-uncased\
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR \
--max_seq_length 128 \
--per_gpu_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
```
# Evaluation Results
| Metric | Value |
|---|---|
| acc | 0.9124060613527165 |
| loss | 0.21582801340189717 |
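For reference, a minimal inference sketch (an illustration, not part of the original training setup); the index-to-label mapping should be checked against the model's `config.json`:
```python
# Sketch: score a (question, candidate answer) pair with the fine-tuned QNLI model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "savasy/bert-turkish-uncased-qnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

question = "Michael Scotus'un mesleği nedir?"
candidate = "filozof, kimyacı, astrolog ve çevirmen"

inputs = tokenizer(question, candidate, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # map indices to entailment / not_entailment via model.config.id2label
```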
> See all my models:
> https://huggingface.co/savasy
|
user1251/soccer_finetuned_model2_final5
|
user1251
| 2023-06-22T14:40:51Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T14:39:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model2_final5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model2_final5
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 60 | 1.8761 |
| No log | 2.0 | 120 | 1.5666 |
| No log | 3.0 | 180 | 1.4985 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
espnet/dongwei_ami_vad_rnn
|
espnet
| 2023-06-22T14:39:27Z | 0 | 0 | null |
[
"arxiv:1804.00015",
"region:us"
] | null | 2023-06-22T14:19:21Z |
## Environments
- date: `Thu May 4 10:25:40 EDT 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 1.8.1`
- Git hash: `1bd1db914b21bfb5ae5acbe2fc7162e3815ed260`
- Commit date: `Thu May 4 08:48:15 2023 -0400`
## Model info
- Model link: https://huggingface.co/espnet/dongwei_ami_vad_rnn
- ASR config: conf/tuning/train_vad_rnn.yaml
- Decode config: conf/tuning/decode_rnn.yaml
## exp/vad_train_asr_transformer_raw
### PRECISION
|dataset|value|
|---|---|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_dev/result.txt|0.9311|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_eval/result.txt|0.9547|
### RECALL
|dataset|value|
|---|---|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_dev/result.txt|0.9277|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_eval/result.txt|0.9412|
### F1_SCORE
|dataset|value|
|---|---|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_dev/result.txt|0.9294|
|exp/vad_train_asr_transformer_raw/decode_rnn_vad_model_valid.acc.ave/ihm_eval/result.txt|0.9479|
## VAD config
<details><summary>expand</summary>
```
config: conf/tuning/train_vad_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/vad_train_vad_rnn_raw
ngpu: 1
seed: 0
num_workers: 3
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 2
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 14000000
valid_batch_bins: null
train_shape_file:
- exp/vad_stats_raw/train/speech_shape
- exp/vad_stats_raw/train/text_shape
valid_shape_file:
- exp/vad_stats_raw/valid/speech_shape
- exp/vad_stats_raw/valid/text_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/ihm_train/wav.scp
- speech
- sound
- - dump/raw/ihm_train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/ihm_dev/wav.scp
- speech
- sound
- - dump/raw/ihm_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.003
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
pre_postencoder_norm: false
init: null
input_size: null
use_preprocessor: true
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
segment_length: 10.0
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/vad_stats_raw/train/feats_stats.npz
model: espnet
model_conf:
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: rnn
encoder_conf:
rnn_type: gru
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 320
output_size: 320
dropout: 0.2
subsample:
- 1
- 1
- 1
- 1
required:
- output_dir
version: '202304'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
swl-models/ColorBox
|
swl-models
| 2023-06-22T14:37:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T14:32:57Z |
---
license: creativeml-openrail-m
---
|
user1251/soccer_finetuned_model2_final4
|
user1251
| 2023-06-22T14:27:04Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T14:17:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model2_final4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model2_final4
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3698 | 1.0 | 614 | 0.8841 |
| 0.9091 | 2.0 | 1228 | 0.7799 |
| 0.8325 | 3.0 | 1842 | 0.7534 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
UnaiGurbindo/q-FrozenLake-v1-4x4-noSlippery
|
UnaiGurbindo
| 2023-06-22T14:26:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T14:26:46Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="UnaiGurbindo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bartuso/stable-diffusion-oxified
|
bartuso
| 2023-06-22T14:15:45Z | 30 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T14:02:28Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: an image of the oxenai ox
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - bartuso/stable-diffusion-oxified
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on an image of the oxenai ox using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
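A minimal generation sketch (an illustration, not part of the original card), using the instance prompt above:
```python
# Sketch: load the DreamBooth weights with diffusers and sample one image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "bartuso/stable-diffusion-oxified", torch_dtype=torch.float16
).to("cuda")

image = pipe("an image of the oxenai ox", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("oxenai_ox.png")
```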
|
evatan/cucumber_w_prior
|
evatan
| 2023-06-22T14:11:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T13:46:00Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks cucumber
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/cucumber_w_prior
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks cucumber using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
user1251/soccer_finetuned_model2_final3
|
user1251
| 2023-06-22T14:08:06Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T14:06:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model2_final3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model2_final3
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 61 | 1.8610 |
| No log | 2.0 | 122 | 1.5670 |
| No log | 3.0 | 183 | 1.4985 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
nev/dalle-mini-pytorch
|
nev
| 2023-06-22T14:04:21Z | 173 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
The small DALLE-mini converted to PyTorch
[Colab](https://colab.research.google.com/drive/1Blh-hTfhyry-YvitH8A95Duzwtm17Xz-?usp=sharing)
|
ricklicona/bert-finetuned-ner
|
ricklicona
| 2023-06-22T14:02:41Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-21T14:24:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9357722231418639
- name: Recall
type: recall
value: 0.9513631773813531
- name: F1
type: f1
value: 0.9435032963364767
- name: Accuracy
type: accuracy
value: 0.9867840113027609
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0580
- Precision: 0.9358
- Recall: 0.9514
- F1: 0.9435
- Accuracy: 0.9868
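A short usage sketch (an illustration, not generated by the Trainer):
```python
# Sketch: run the fine-tuned NER model with the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ricklicona/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```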
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0849 | 1.0 | 1756 | 0.0663 | 0.9118 | 0.9355 | 0.9235 | 0.9829 |
| 0.0353 | 2.0 | 3512 | 0.0600 | 0.9277 | 0.9480 | 0.9377 | 0.9859 |
| 0.019 | 3.0 | 5268 | 0.0580 | 0.9358 | 0.9514 | 0.9435 | 0.9868 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.1.0.dev20230616
- Datasets 2.12.0
- Tokenizers 0.13.3
|
guilleguells/cypher-7b-SmallModel
|
guilleguells
| 2023-06-22T13:48:57Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T13:48:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
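For reference, a sketch (an illustration, not part of the original card) of the equivalent `BitsAndBytesConfig` when loading the base model and attaching this adapter; `"base-model-id"` is a placeholder, since the card does not name the base model:
```python
# Sketch: reproduce the quantization settings listed above and attach the adapter.
# "base-model-id" is a placeholder; the card does not name the base model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
model = PeftModel.from_pretrained(base_model, "guilleguells/cypher-7b-SmallModel")
```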
### Framework versions
- PEFT 0.4.0.dev0
|
rodrigoclira/dqn-SpaceInvadersNoFrameskip-v4
|
rodrigoclira
| 2023-06-22T13:44:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T13:44:17Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 499.50 +/- 185.11
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rodrigoclira -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rodrigoclira -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rodrigoclira
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
evatan/alvan_dog_wo_prior
|
evatan
| 2023-06-22T13:34:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-22T13:18:25Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/alvan_dog_wo_prior
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
|
Koantek/dolly_llama-v2
|
Koantek
| 2023-06-22T13:33:33Z | 5 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T12:05:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
ag159/poca-SoccerTwos
|
ag159
| 2023-06-22T13:26:48Z | 40 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-22T13:24:17Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ag159/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
guilleguells/cypher-7b-BigModel
|
guilleguells
| 2023-06-22T13:24:36Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T13:24:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
swl-models/DarkSushiMix-Darker
|
swl-models
| 2023-06-22T13:13:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T13:11:45Z |
---
license: creativeml-openrail-m
---
|
ighina/roberta_topseg_softmax_mean_wikicity
|
ighina
| 2023-06-22T13:13:27Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-22T13:04:40Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ighina/roberta_topseg_softmax_mean_wikicity
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ighina/roberta_topseg_softmax_mean_wikicity')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ighina/roberta_topseg_softmax_mean_wikicity')
model = AutoModel.from_pretrained('ighina/roberta_topseg_softmax_mean_wikicity')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ighina/roberta_topseg_softmax_mean_wikicity)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11254 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
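For reference, here is a hedged sketch of how a comparable run could be set up with sentence-transformers. The base checkpoint, the toy training pairs and `num_labels=2` are assumptions (the card does not state them); the hyperparameters mirror the values listed above:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base encoder; the card only reports a RobertaModel with mean pooling
model = SentenceTransformer("roberta-base")

# Toy pair-classification examples; the real run used ~11k batches of 64 pairs
train_examples = [
    InputExample(texts=["sentence a", "sentence b"], label=1),
    InputExample(texts=["sentence a", "sentence c"], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=64)

# SoftmaxLoss as reported above; num_labels=2 is an assumption
train_loss = losses.SoftmaxLoss(
    model=model,
    sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
    num_labels=2,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=10000,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```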
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NasimB/gpt2_left_out_wikipedia
|
NasimB
| 2023-06-22T13:11:14Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T09:54:16Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2_left_out_wikipedia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_left_out_wikipedia
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.8141 | 0.27 | 500 | 4.8520 |
| 4.5861 | 0.53 | 1000 | 4.4909 |
| 4.3045 | 0.8 | 1500 | 4.2742 |
| 4.0861 | 1.07 | 2000 | 4.1490 |
| 3.9278 | 1.33 | 2500 | 4.0562 |
| 3.8591 | 1.6 | 3000 | 3.9800 |
| 3.7835 | 1.87 | 3500 | 3.9083 |
| 3.6499 | 2.13 | 4000 | 3.8799 |
| 3.567 | 2.4 | 4500 | 3.8381 |
| 3.5361 | 2.67 | 5000 | 3.7975 |
| 3.5278 | 2.93 | 5500 | 3.7552 |
| 3.3555 | 3.2 | 6000 | 3.7622 |
| 3.3265 | 3.47 | 6500 | 3.7426 |
| 3.3305 | 3.73 | 7000 | 3.7122 |
| 3.3246 | 4.0 | 7500 | 3.6889 |
| 3.0968 | 4.27 | 8000 | 3.7216 |
| 3.1248 | 4.53 | 8500 | 3.7057 |
| 3.1354 | 4.8 | 9000 | 3.6846 |
| 3.0701 | 5.07 | 9500 | 3.7066 |
| 2.8974 | 5.33 | 10000 | 3.7183 |
| 2.9258 | 5.6 | 10500 | 3.7096 |
| 2.9387 | 5.87 | 11000 | 3.6943 |
| 2.7975 | 6.13 | 11500 | 3.7369 |
| 2.6972 | 6.4 | 12000 | 3.7468 |
| 2.7193 | 6.67 | 12500 | 3.7422 |
| 2.7233 | 6.93 | 13000 | 3.7337 |
| 2.5434 | 7.2 | 13500 | 3.7783 |
| 2.5072 | 7.47 | 14000 | 3.7864 |
| 2.5183 | 7.73 | 14500 | 3.7869 |
| 2.5263 | 8.0 | 15000 | 3.7838 |
| 2.3533 | 8.27 | 15500 | 3.8174 |
| 2.3661 | 8.53 | 16000 | 3.8220 |
| 2.3659 | 8.8 | 16500 | 3.8246 |
| 2.3462 | 9.07 | 17000 | 3.8313 |
| 2.286 | 9.33 | 17500 | 3.8359 |
| 2.2867 | 9.6 | 18000 | 3.8367 |
| 2.2885 | 9.87 | 18500 | 3.8366 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
A1abz/ppo-Huggy
|
A1abz
| 2023-06-22T13:08:49Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T13:08:44Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: A1abz/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
swl-models/DarkSushiMix-Colorful
|
swl-models
| 2023-06-22T13:02:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T13:01:07Z |
---
license: creativeml-openrail-m
---
|
serkanBurakOrs/poca-SoccerTwos
|
serkanBurakOrs
| 2023-06-22T13:00:34Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-17T13:38:25Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: serkanBurakOrs/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
swl-models/DarkSushiMix-2.25D
|
swl-models
| 2023-06-22T12:59:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T12:57:05Z |
---
license: creativeml-openrail-m
---
|
janezb/sloberta-finetuned-dlib-1850-1919
|
janezb
| 2023-06-22T12:43:14Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"sl",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-22T12:31:10Z |
---
license: cc-by-sa-4.0
language:
- sl
pipeline_tag: fill-mask
---
This is based on SloBERTa (https://huggingface.co/EMBEDDIA/sloberta) but fine-tuned for 5 epochs
on the text of all Slovenian-language documents available on the Slovenian Digital Library (https://dlib.si)
from the period 1850-1919. This was about 8.2 GB of text. Note that it also contained a lot of OCR errors.
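A minimal usage sketch (it assumes the standard `transformers` fill-mask pipeline and the CamemBERT-style `<mask>` token that SloBERTa uses; the Slovenian sentence is only illustrative):
```python
from transformers import pipeline

# Masked-token prediction with the fine-tuned checkpoint
fill_mask = pipeline("fill-mask", model="janezb/sloberta-finetuned-dlib-1850-1919")

# "Ljubljana is the capital <mask> of Slovenia." -- expects something like "mesto"
for prediction in fill_mask("Ljubljana je glavno <mask> Slovenije."):
    print(prediction["token_str"], round(prediction["score"], 3))
```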
|
gokuls/sa_bert_12_layer_modified_complete_training_48
|
gokuls
| 2023-06-22T12:41:33Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-20T10:02:27Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sa_bert_12_layer_modified_complete_training_48
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sa_bert_12_layer_modified_complete_training_48
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7897
- Accuracy: 0.5117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 6.5942 | 0.05 | 10000 | 6.5714 | 0.1229 |
| 6.1563 | 0.11 | 20000 | 6.3437 | 0.1392 |
| 6.1425 | 0.16 | 30000 | 6.2474 | 0.1444 |
| 6.2249 | 0.22 | 40000 | 6.1900 | 0.1468 |
| 6.1498 | 0.27 | 50000 | 6.1482 | 0.1487 |
| 6.0528 | 0.33 | 60000 | 6.1192 | 0.1492 |
| 6.0103 | 0.38 | 70000 | 6.0762 | 0.1504 |
| 5.8523 | 0.44 | 80000 | 5.8731 | 0.1615 |
| 5.91 | 0.49 | 90000 | 5.7442 | 0.1765 |
| 5.4931 | 0.55 | 100000 | 5.5985 | 0.1952 |
| 5.4145 | 0.6 | 110000 | 5.4716 | 0.2100 |
| 5.3729 | 0.66 | 120000 | 5.3366 | 0.2247 |
| 5.2655 | 0.71 | 130000 | 5.1946 | 0.2417 |
| 5.2975 | 0.76 | 140000 | 5.0287 | 0.2600 |
| 4.9997 | 0.82 | 150000 | 4.8593 | 0.2791 |
| 4.831 | 0.87 | 160000 | 4.6226 | 0.3041 |
| 4.9176 | 0.93 | 170000 | 4.4211 | 0.3257 |
| 4.5352 | 0.98 | 180000 | 4.2328 | 0.3429 |
| 4.1536 | 1.04 | 190000 | 4.0635 | 0.3598 |
| 4.0216 | 1.09 | 200000 | 3.9109 | 0.3755 |
| 4.0744 | 1.15 | 210000 | 3.7761 | 0.3897 |
| 3.7468 | 1.2 | 220000 | 3.6636 | 0.4038 |
| 3.5015 | 1.26 | 230000 | 3.5047 | 0.4236 |
| 3.5717 | 1.31 | 240000 | 3.4014 | 0.4370 |
| 3.1969 | 1.37 | 250000 | 3.3173 | 0.4479 |
| 3.5026 | 1.42 | 260000 | 3.2254 | 0.4588 |
| 3.287 | 1.47 | 270000 | 3.1845 | 0.4643 |
| 3.3462 | 1.53 | 280000 | 3.0979 | 0.4738 |
| 3.3996 | 1.58 | 290000 | 3.0808 | 0.4764 |
| 3.2324 | 1.64 | 300000 | 3.0163 | 0.4843 |
| 3.0972 | 1.69 | 310000 | 2.9738 | 0.4890 |
| 3.1621 | 1.75 | 320000 | 2.9450 | 0.4927 |
| 3.0282 | 1.8 | 330000 | 2.9135 | 0.4964 |
| 3.0674 | 1.86 | 340000 | 2.9059 | 0.4979 |
| 2.9437 | 1.91 | 350000 | 2.8810 | 0.5007 |
| 2.8208 | 1.97 | 360000 | 2.8316 | 0.5064 |
| 2.9005 | 2.02 | 370000 | 2.8061 | 0.5098 |
| 2.7574 | 2.08 | 380000 | 2.7897 | 0.5117 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.13.0
- Tokenizers 0.13.3
|
kebei/poca-SoccerTwos
|
kebei
| 2023-06-22T12:35:53Z | 14 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-22T12:35:46Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: kebei/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
TheBloke/Flan-OpenLlama-7B-GGML
|
TheBloke
| 2023-06-22T12:28:59Z | 0 | 8 | null |
[
"license:other",
"region:us"
] | null | 2023-06-22T08:56:04Z |
---
inference: false
license: other
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Concept of Mind's Flan Open Llama 7B GGML
These files are GGML format model files for [Concept of Mind's Flan Open Llama 7B](https://huggingface.co/conceptofmind/Flan-Open-Llama-7b).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Flan-OpenLlama-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Flan-OpenLlama-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/conceptofmind/Flan-Open-Llama-7b)
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
These are guaranteed to be compatible with any UIs, tools and libraries released since late May.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.
They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| flan-openllama-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.87 GB | 5.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| flan-openllama-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.60 GB | 6.10 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| flan-openllama-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.28 GB | 5.78 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| flan-openllama-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.95 GB | 5.45 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| flan-openllama-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original llama.cpp quant method, 4-bit. |
| flan-openllama-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| flan-openllama-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.08 GB | 6.58 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| flan-openllama-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.83 GB | 6.33 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| flan-openllama-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| flan-openllama-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| flan-openllama-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.78 GB | 7.28 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| flan-openllama-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.65 GB | 7.15 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| flan-openllama-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| flan-openllama-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m flan-openllama-7b.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
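The same GGML files can also be driven from Python. A minimal sketch with `llama-cpp-python` (a GGML-compatible release is assumed, since newer builds only read GGUF; the file path and generation settings are illustrative):
```python
from llama_cpp import Llama

# Load one of the quantised GGML files; n_gpu_layers offloads layers when built with GPU support
llm = Llama(
    model_path="flan-openllama-7b.ggmlv3.q5_0.bin",
    n_ctx=2048,
    n_gpu_layers=32,
)

prompt = "### Instruction: Write a story about llamas\n### Response:"
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```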
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer , vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius , Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost , Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Concept of Mind's Flan Open Llama 7B
No original model card was provided.
|
wordcab/whisper-large-fp16-ru
|
wordcab
| 2023-06-22T12:23:03Z | 3 | 0 |
transformers
|
[
"transformers",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-22T08:02:51Z |
---
license: apache-2.0
language:
- ru
---
This is a CTranslate2 float16 version of the [mitchelldehaven/whisper-large-v2-ru](https://huggingface.co/mitchelldehaven/whisper-large-v2-ru) model.
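A minimal transcription sketch, assuming the faster-whisper library (which runs CTranslate2 Whisper checkpoints); the device, compute type and audio path are illustrative:
```python
from faster_whisper import WhisperModel

# Point faster-whisper at this repo (it will download the CTranslate2 files)
model = WhisperModel("wordcab/whisper-large-fp16-ru", device="cuda", compute_type="float16")

segments, info = model.transcribe("audio.wav", language="ru")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```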
|
wordcab/whisper-large-int8-ru
|
wordcab
| 2023-06-22T12:21:04Z | 3 | 0 |
transformers
|
[
"transformers",
"ru",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-06-22T07:37:41Z |
---
license: apache-2.0
language:
- ru
---
This is a ctranslate2 int8 version of the [mitchelldehaven/whisper-large-v2-ru](https://huggingface.co/mitchelldehaven/whisper-large-v2-ru) model.
|
DHISNEMO/finetuning-sentiment-model-3000-samples
|
DHISNEMO
| 2023-06-22T11:55:12Z | 94 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T11:18:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.82
- name: F1
type: f1
value: 0.8211920529801325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3951
- Accuracy: 0.82
- F1: 0.8212
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
hbacard/bert-fine-tuned-cola
|
hbacard
| 2023-06-22T10:37:49Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T10:20:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5906590396340186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8240
- Matthews Correlation: 0.5907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4764 | 1.0 | 1069 | 0.5198 | 0.4949 |
| 0.3207 | 2.0 | 2138 | 0.6520 | 0.5757 |
| 0.1841 | 3.0 | 3207 | 0.8240 | 0.5907 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AIfenaike/CoQA-bloom-560m
|
AIfenaike
| 2023-06-22T10:26:41Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-22T10:26:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
A1abz/ppo-LunarLander-v2
|
A1abz
| 2023-06-22T10:22:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T10:22:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.90 +/- 20.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
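In the meantime, a minimal loading sketch; the `ppo-LunarLander-v2.zip` filename is an assumption (the usual huggingface_sb3 convention), so adjust it to whatever this repo actually contains:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it into a PPO model
checkpoint = load_from_hub(repo_id="A1abz/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate for a few episodes
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```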
|
jnwprk/hate_detection_model
|
jnwprk
| 2023-06-22T10:18:17Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-22T09:42:59Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: hate_detection_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hate_detection_model
This model is a fine-tuned version of [sangrimlee/bert-base-multilingual-cased-nsmc](https://huggingface.co/sangrimlee/bert-base-multilingual-cased-nsmc) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2937
- Accuracy: 0.7686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 62 | 0.4613 | 0.7834 |
| No log | 2.0 | 124 | 0.5033 | 0.7516 |
| No log | 3.0 | 186 | 0.4699 | 0.7898 |
| No log | 4.0 | 248 | 0.5516 | 0.7516 |
| No log | 5.0 | 310 | 0.6990 | 0.7219 |
| No log | 6.0 | 372 | 0.6500 | 0.7665 |
| No log | 7.0 | 434 | 0.7347 | 0.7856 |
| No log | 8.0 | 496 | 0.9104 | 0.7389 |
| 0.3218 | 9.0 | 558 | 0.7689 | 0.8153 |
| 0.3218 | 10.0 | 620 | 0.9496 | 0.7792 |
| 0.3218 | 11.0 | 682 | 0.9598 | 0.7707 |
| 0.3218 | 12.0 | 744 | 1.2402 | 0.7091 |
| 0.3218 | 13.0 | 806 | 1.1616 | 0.7537 |
| 0.3218 | 14.0 | 868 | 1.0903 | 0.7771 |
| 0.3218 | 15.0 | 930 | 1.3674 | 0.7304 |
| 0.3218 | 16.0 | 992 | 1.1962 | 0.7728 |
| 0.0623 | 17.0 | 1054 | 1.3640 | 0.7452 |
| 0.0623 | 18.0 | 1116 | 1.3093 | 0.7622 |
| 0.0623 | 19.0 | 1178 | 1.3108 | 0.7707 |
| 0.0623 | 20.0 | 1240 | 1.2937 | 0.7686 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
getrajeev03/bart-large-cnn-samsum
|
getrajeev03
| 2023-06-22T10:14:53Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T08:39:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: bart-large-cnn-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 40.1703
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-samsum
This model is a fine-tuned version of [philschmid/bart-large-cnn-samsum](https://huggingface.co/philschmid/bart-large-cnn-samsum) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4821
- Rouge1: 40.1703
- Rouge2: 20.2613
- Rougel: 30.8068
- Rougelsum: 37.4968
- Gen Len: 60.0366
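A minimal inference sketch (illustrative only, using the standard `transformers` summarization pipeline; the dialogue is made up):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="getrajeev03/bart-large-cnn-samsum")

dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes, 12:30 at the usual place.
Anna: Perfect, see you there."""

print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```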
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.1917 | 1.0 | 7366 | 1.4821 | 40.1703 | 20.2613 | 30.8068 | 37.4968 | 60.0366 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.0
- Tokenizers 0.11.0
|
Leukschrauber/Taxi-v3
|
Leukschrauber
| 2023-06-22T10:05:25Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-22T10:05:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# load_from_hub is the helper defined in the Hugging Face Deep RL course notebook (it downloads the pickled Q-table)
model = load_from_hub(repo_id="Leukschrauber/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
import gymnasium as gym  # assumption: the gymnasium API; use `import gym` for older setups
env = gym.make(model["env_id"])
```
|
VMVstudio/neutral
|
VMVstudio
| 2023-06-22T10:02:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-22T10:02:20Z |
---
license: creativeml-openrail-m
---
|
hts98/whisper-large-v2-paper_
|
hts98
| 2023-06-22T10:02:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-22T06:34:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-v2-paper_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-v2-paper_
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4133
- Wer: 47.7467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 143 | 0.3626 | 71.8596 |
| No log | 2.0 | 286 | 0.3398 | 50.4925 |
| No log | 3.0 | 429 | 0.3426 | 52.2600 |
| 0.3684 | 4.0 | 572 | 0.3541 | 46.2800 |
| 0.3684 | 5.0 | 715 | 0.3721 | 46.6114 |
| 0.3684 | 6.0 | 858 | 0.4133 | 47.7467 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
|
nolanspecter/ppo-Huggy
|
nolanspecter
| 2023-06-22T09:44:55Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-22T09:44:13Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nolanspecter/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
user1251/soccer_finetuned_model_final5
|
user1251
| 2023-06-22T09:33:06Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T09:28:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: soccer_finetuned_model_final5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# soccer_finetuned_model_final5
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 189 | 3.9536 |
| No log | 2.0 | 378 | 3.9239 |
| 3.7068 | 3.0 | 567 | 3.9197 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
dhifanrazaqa/t5-end2end-questions-generation
|
dhifanrazaqa
| 2023-06-22T09:19:25Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-02T06:51:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [muchad/idt5-base](https://huggingface.co/muchad/idt5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.3156 | 0.34 | 100 | 2.2625 |
| 2.5509 | 0.67 | 200 | 2.0394 |
| 2.3619 | 1.01 | 300 | 1.9596 |
| 2.2501 | 1.34 | 400 | 1.9272 |
| 2.2 | 1.68 | 500 | 1.9074 |
| 2.1682 | 2.02 | 600 | 1.8882 |
| 2.1222 | 2.35 | 700 | 1.8893 |
| 2.0874 | 2.69 | 800 | 1.8722 |
| 2.0751 | 3.03 | 900 | 1.8656 |
| 2.0501 | 3.36 | 1000 | 1.8506 |
| 2.0338 | 3.7 | 1100 | 1.8491 |
| 2.0326 | 4.03 | 1200 | 1.8428 |
| 1.9992 | 4.37 | 1300 | 1.8445 |
| 2.0081 | 4.71 | 1400 | 1.8449 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
chansung/gpt4-alpaca-lora-13b-decapoda-1024
|
chansung
| 2023-06-22T09:16:45Z | 0 | 4 | null |
[
"safetensors",
"alpaca",
"llama",
"chat",
"gpt4",
"text2text-generation",
"en",
"license:apache-2.0",
"region:us"
] |
text2text-generation
| 2023-04-17T01:27:23Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text2text-generation
tags:
- alpaca
- llama
- chat
- gpt4
---
This repository comes with a LoRA checkpoint that turns LLaMA into a chatbot-like language model. The checkpoint is the output of an instruction-following fine-tuning process with the following settings on an 8xA100 (40G) DGX system. In this fine-tuning process, the token length is set to `1024` since GPT tends to generate long texts.
- Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation
- Training command:
```shell
python finetune.py \
--base_model='decapoda-research/llama-13b-hf' \
--data_path='alpaca_data_gpt4.json' \
--num_epochs=10 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./gpt4-alpaca-lora-13b' \
--lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
--lora_r=16 \
--batch_size=... \
--micro_batch_size=...
```
You can find how the training went from W&B report [here](https://wandb.ai/chansung18/gpt4_alpaca_lora/runs/s5xxpdal?workspace=user-chansung18).
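A minimal loading sketch, assuming the usual `peft` + `transformers` flow for Alpaca-LoRA checkpoints; the exact prompt template and generation settings are illustrative rather than canonical:
```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the base LLaMA-13B weights, then attach this LoRA adapter on top
base_model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "chansung/gpt4-alpaca-lora-13b-decapoda-1024")
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-13b-hf")

prompt = "### Instruction:\nTell me about alpacas.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```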
## Example output
1. Example #1
```
Instruction: List all Canadian provinces in alphabetical order.
Response: List of Canadian provinces in alphabetical order:
Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Northwest Territories, Nova Scotia, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan.
Instruction: Which ones are on the east side?
Response: Eastern provinces of Canada are: Newfoundland and Labrador, Nova Scotia, Prince Edward Island, New Brunswick.
Instruction: What foods are famous in each province on the east side?
Response: Newfoundland and Labrador: Jiggs Dinner, Screech, Partridgeberry Wine, Flipper Pie, Nanny’s Cookies. Nova Scotia: Donair, Pizza Burger, Apple Pie, Halifax Donair. New Brunswick:
Fried Dough (Malpeque Doughnuts), Poutine, Acadian Foods (Cote-de-beouf, potato chips, cabbage rolls, pickled beets). Prince Edward Island: Coleslaw, Lobster, Potato Bread, Blueberry Pies, Butter Tarts.
Instruction: What about sightseeing? or landmarks? list one per province
Response: Newfoundland and Labrador: Cabot Tower (St. John’s), Terra Nova National Park, Gros Morne National Park, Signal Hill. Nova Scotia: Cabot Trail, Peggy’s Cove, Lighthouses, Halifax Waterfront. New Brunswick: Fundy National Park, Bay of Fundy Tides,
Loyalist House, Hopewell Rocks. Prince Edward Island: Green Gables Shore, Cavendish National Park, Anne of Green Gables House.
```
2. Example #2
```
Instruction: Tell me about alpacas.
Response: Alpacas are a type of llama that live in the Andes mountains in South America. They are native to Peru, Chile, Bolivia, and Ecuador, where they are valued for their fiber. Alpacas are related to camels and they are cousins of the llama. Alpacas have woolly fur that comes in many colors and they are very gentle creatures. They are considered the smaller cousins of the llama and they are about the size of a medium dog. Alpacas have long necks and long, lush fur around their necks, which is known as their “ruff.” They also have long lashes that help keep dust and debris from their eyes. Alpacas have hooves that help them move easily through the rocky terrain of the Andes mountains. They make a soft “humbing” sound when they are content. Alpacas live in herds, and females are called “huacas,” and males are called “chancos.” Alpacas can live for up to 20 years. Alpacas can come in a variety of colors, including white, black, brown,
Instruction: What other animals are living in the same area?
Response: Alpacas live in the Andes mountains, and in that region there are many other animals. There are condors, vicunas, pumas, rheas, foxes, chinchillas, and more.
Instruction: Are they the same species?
Response: Alpacas and llamas are both members of the camelid family, but there are differences. Alpacas have longer necks, smaller ears, and shorter, thicker wool. Llamas are larger and their wool is coarser.
```
3. Example #3
```
Instruction: Tell me about the king of France in 2019.
Response: In 2019, King of France is not applicable, as France does not have a monarch. It has been a republic since the 1790s and its constitution prohibits the monarchy from being reinstated. The President of France is currently Emmanuel Macron.
Instruction: What about before him?
Response: Before Emmanuel Macron, there were 5 other presidents: Nicolas Sarkozy, Jacques Chirac, François Mitterrand, Valéry Giscard d’Estaing, and Georges Pompidou.
```
4. Example #4
```
Instruction: Write a Python program that prints the first 10 Fibonacci numbers.
Response:
def fibonacci_numbers():
a, b = 0, 1
for i in range(10):
a, b = b, a + b
print(a, b)
fibonacci_numbers()
Output: 0 1 1 2 3 5 8 13 21 34
Instruction: Could you explain how the code works?
Response: First, the function fibonacci_numbers() is defined. The range function creates a range of numbers, in this case between 0 and 10.
Inside the loop, we initialize two variables a and b to 0 and 1 respectively. a, b represent the previous and current Fibonacci numbers respectively. For each number in the range, we add the previous Fibonacci number b to the current one a.
At the end of the loop, the a and b variables will hold the Fibonacci numbers from 0 to 10. This is then printed on the screen.
Instruction: What is recursion?
Response: Recursion is a way of defining a method in which the method calls itself until a certain condition is met. In Python, recursion is used when there is a task that can be broken into smaller tasks that can be done recursively. A recursive function can be defined as any function that calls itself in the body of the function.
```
|
sivateja-trustt/falcon7b
|
sivateja-trustt
| 2023-06-22T09:14:46Z | 0 | 1 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-06-22T08:37:34Z |
---
license: apache-2.0
language:
- en
---
|
rudzhehdehd/Love_Letter
|
rudzhehdehd
| 2023-06-22T09:13:17Z | 170 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-22T09:07:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: Love_Letter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Love_Letter
This model is a fine-tuned version of [EasthShin/BTS_Lyrics_GPT-Neo-base](https://huggingface.co/EasthShin/BTS_Lyrics_GPT-Neo-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 200 | 1.2588 |
| No log | 2.0 | 400 | 1.1366 |
| 1.3097 | 3.0 | 600 | 1.1046 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Naonori/billsum_model_for_test
|
Naonori
| 2023-06-22T09:03:50Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-22T09:01:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: billsum_model_for_test
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1461
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_model_for_test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4420
- Rouge1: 0.1461
- Rouge2: 0.0524
- Rougel: 0.121
- Rougelsum: 0.121
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7503 | 0.1244 | 0.035 | 0.105 | 0.1052 | 19.0 |
| No log | 2.0 | 124 | 2.5250 | 0.1361 | 0.0455 | 0.1141 | 0.1144 | 19.0 |
| No log | 3.0 | 186 | 2.4594 | 0.1459 | 0.0523 | 0.1202 | 0.1202 | 19.0 |
| No log | 4.0 | 248 | 2.4420 | 0.1461 | 0.0524 | 0.121 | 0.121 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|