| Column | Type | Range / distinct values |
|---|---|---|
| modelId | string | length 5-139 |
| author | string | length 2-42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-04 18:27:43 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 539 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-04 18:27:26 |
| card | string | length 11 to 1.01M |
fx815/Reinforce-Pixelcopter-PLE-v0
|
fx815
| 2023-06-08T13:39:26Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T13:39:24Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.40 +/- 12.66
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
breadlicker45/music-rwkv-v4
|
breadlicker45
| 2023-06-08T13:29:28Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"text-generation",
"dataset:breadlicker45/musenet-encoders-12k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T23:03:17Z |
---
datasets:
- breadlicker45/musenet-encoders-12k
---
|
hyllius/rl_learning
|
hyllius
| 2023-06-08T13:26:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-07T14:23:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.09 +/- 38.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file stored in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="hyllius/rl_learning", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
fx1H/ppo-LunarLander-v2
|
fx1H
| 2023-06-08T13:25:55Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T13:25:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.86 +/- 13.71
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file stored in this repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="fx1H/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
genggui001/decapoda-research-llama-30b-megatron-states
|
genggui001
| 2023-06-08T13:16:32Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2023-06-08T10:46:57Z |
---
license: other
---
LLaMA-30B converted to work with Transformers/HuggingFace. This is under a special license, please see the LICENSE file for details.
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
| Number of parameters | dimension | n heads | n layers | Learning rate | Batch size | n tokens |
| -------------------: | --------: | ------: | -------: | ------------: | ---------: | -------: |
| 7B | 4096 | 32 | 32 | 3.0E-04 | 4M | 1T |
| 13B | 5120 | 40 | 40 | 3.0E-04 | 4M | 1T |
| 33B | 6656 | 52 | 60 | 1.5E-04 | 4M | 1.4T |
| 65B | 8192 | 64 | 80 | 1.5E-04 | 4M | 1.4T |
*Table 1 - Summary of LLaMA model hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
| Number of parameters | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | COPA |
| -------------------: | ----: | ---: | ---: | --------: | ---------: | ----: | ----: | ---: | ---: |
| 7B | 76.5 | 79.8 | 48.9 | 76.1 | 70.1 | 76.7 | 47.6 | 57.2 | 93 |
| 13B | 78.1 | 80.1 | 50.4 | 79.2 | 73 | 78.1 | 52.7 | 56.4 | 94 |
| 33B | 83.1 | 82.3 | 50.4 | 82.8 | 76 | 81.4 | 57.8 | 58.6 | 92 |
| 65B | 85.3 | 82.8 | 52.3 | 84.2 | 77 | 81.5 | 56 | 60.2 | 94 |
*Table 2 - Summary of LLaMA model performance on reasoning tasks*
We present our results on bias in the table below. Note that lower values are better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of the bias in our model outputs*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
fx815/Reinforce-CartPole-v1
|
fx815
| 2023-06-08T13:06:00Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T13:05:50Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 483.20 +/- 35.14
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
EleutherAI/pythia-6.9b-deduped
|
EleutherAI
| 2023-06-08T13:05:19Z | 10,856 | 8 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/the_pile_deduplicated",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-02-25T17:56:57Z |
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-6.9B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-6.9B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-6.9B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-6.9B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-6.9B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-6.9B-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-6.9B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-6.9B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-6.9B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
oskarhol/gpt-sw3-instruct-1.3b
|
oskarhol
| 2023-06-08T13:02:53Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T12:32:12Z |
---
license: bigscience-openrail-m
---
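The card body is otherwise empty. Below is a minimal, hedged text-generation sketch, assuming the checkpoint loads with the standard `transformers` pipeline for its GPT-2 architecture (the Swedish prompt is purely illustrative):
```python
from transformers import pipeline

# The repository is tagged as a GPT-2-style text-generation model, so the standard pipeline should apply.
generator = pipeline("text-generation", model="oskarhol/gpt-sw3-instruct-1.3b")

# Illustrative Swedish prompt ("Tell me about language models.").
print(generator("Berätta om språkmodeller.", max_new_tokens=60)[0]["generated_text"])
```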
|
asure22/dbert_qa_model_070623
|
asure22
| 2023-06-08T12:58:15Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-08T02:38:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: dbert_qa_model_070623
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dbert_qa_model_070623
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7495
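A minimal inference sketch with the standard `question-answering` pipeline (the question/context pair is illustrative, not from the card):
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT checkpoint into an extractive-QA pipeline.
qa = pipeline("question-answering", model="asure22/dbert_qa_model_070623")

# Illustrative inputs; any question/context pair works.
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="dbert_qa_model_070623 is a DistilBERT model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```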
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.5076 |
| 2.746 | 2.0 | 500 | 1.8158 |
| 2.746 | 3.0 | 750 | 1.7495 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
apipo/kepipo
|
apipo
| 2023-06-08T12:54:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T06:42:37Z |
---
license: creativeml-openrail-m
---
|
Parthi/a2c-AntBulletEnv-v0
|
Parthi
| 2023-06-08T12:45:48Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T12:35:35Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1896.18 +/- 226.38
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file stored in this repository):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="Parthi/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
tmpusr/ppo-PyramidsRND
|
tmpusr
| 2023-06-08T12:43:42Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-08T12:43:38Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tmpusr/ppo-PyramidsRND
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
wykonos/a2c-AntBulletEnv-v0
|
wykonos
| 2023-06-08T12:41:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-07T21:37:23Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1158.41 +/- 308.27
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file stored in this repository):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="wykonos/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
P3ps/bert-finetuned-cross-ner-v3
|
P3ps
| 2023-06-08T12:40:43Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-08T11:20:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-cross-ner-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-cross-ner-v3
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1790
- Precision: 0.8305
- Recall: 0.8629
- F1: 0.8464
- Accuracy: 0.9559
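A minimal usage sketch with the `token-classification` pipeline (the entity label set is not documented in the card, so check the outputs against the model's config; the sample sentence is illustrative):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word predictions into whole entity spans.
ner = pipeline(
    "token-classification",
    model="P3ps/bert-finetuned-cross-ner-v3",
    aggregation_strategy="simple",
)

# Illustrative input sentence.
for entity in ner("Marie Curie worked at the University of Paris."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```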
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2023 | 1.0 | 2607 | 0.1921 | 0.7785 | 0.8197 | 0.7985 | 0.9468 |
| 0.1244 | 2.0 | 5214 | 0.1740 | 0.8211 | 0.8541 | 0.8373 | 0.9547 |
| 0.0792 | 3.0 | 7821 | 0.1790 | 0.8305 | 0.8629 | 0.8464 | 0.9559 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sadFaceEmoji/gpt-neo-1.3B-poem
|
sadFaceEmoji
| 2023-06-08T12:33:12Z | 8 | 0 |
peft
|
[
"peft",
"text-generation",
"en",
"dataset:sadFaceEmoji/english-poems",
"region:us"
] |
text-generation
| 2023-06-08T12:32:13Z |
---
library_name: peft
datasets:
- sadFaceEmoji/english-poems
language:
- en
pipeline_tag: text-generation
---
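The card body is otherwise empty. A minimal loading sketch, assuming the repository stores a PEFT adapter whose config records its base model (the prompt is illustrative):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "sadFaceEmoji/gpt-neo-1.3B-poem"

# The adapter config records the base checkpoint it was trained on.
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the poem adapter to the base model and generate.
model = PeftModel.from_pretrained(base, adapter_id)
inputs = tokenizer("The autumn moon", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```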
|
kejolong/nanashe
|
kejolong
| 2023-06-08T12:02:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T12:01:05Z |
---
license: creativeml-openrail-m
---
|
MartinGui/distilbert-base-uncased-finetuned-imdb
|
MartinGui
| 2023-06-08T11:52:12Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-08T11:38:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
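A minimal usage sketch with the `fill-mask` pipeline (the movie-review prompt is illustrative):
```python
from transformers import pipeline

# DistilBERT expects the [MASK] token in masked-language-model prompts.
fill = pipeline("fill-mask", model="MartinGui/distilbert-base-uncased-finetuned-imdb")

for prediction in fill("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```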
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4897 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ArturR01/segformer-b0-pytorch-bottles
|
ArturR01
| 2023-06-08T11:44:13Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2023-06-08T09:45:34Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-pytorch-bottles
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-pytorch-bottles
This model is a fine-tuned version of [ArturR01/segformer-b0-pytorch-bottles](https://huggingface.co/ArturR01/segformer-b0-pytorch-bottles) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0215
- Mean Iou: 0.4975
- Mean Accuracy: 0.9949
- Overall Accuracy: 0.9949
- Accuracy Unlabeled: nan
- Accuracy Bottle: 0.9949
- Iou Unlabeled: 0.0
- Iou Bottle: 0.9949
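A minimal inference sketch, assuming the repository ships an image-processor config alongside the SegFormer weights (the image path is a placeholder):
```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

repo_id = "ArturR01/segformer-b0-pytorch-bottles"
processor = SegformerImageProcessor.from_pretrained(repo_id)  # assumes a preprocessor config is present
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)

image = Image.open("bottle.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, H/4, W/4)

# Per-pixel class indices; the card's labels are "unlabeled" and "bottle" (index order is an assumption).
mask = logits.argmax(dim=1)[0]
print(mask.shape, mask.unique())
```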
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Bottle | Iou Unlabeled | Iou Bottle |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:---------------:|:-------------:|:----------:|
| 0.0273 | 0.36 | 60 | 0.0259 | 0.4979 | 0.9959 | 0.9959 | nan | 0.9959 | 0.0 | 0.9959 |
| 0.0238 | 0.72 | 120 | 0.0215 | 0.4975 | 0.9949 | 0.9949 | nan | 0.9949 | 0.0 | 0.9949 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NYTK/PULI-GPT-2
|
NYTK
| 2023-06-08T11:40:03Z | 613 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"puli",
"hu",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-04T10:34:28Z |
---
language:
- hu
tags:
- text-generation
- puli
license: cc-by-nc-4.0
widget:
- text: Elmesélek egy történetet a nyelvtechnológiáról.
---
# PULI GPT-2
For further details, see [our demo site](https://juniper.nytud.hu/demo/gpt2).
- Hungarian GPT-2 model
- Trained with Megatron-DeepSpeed [github](https://github.com/microsoft/Megatron-DeepSpeed)
- Dataset: 36.3 billion words
- Checkpoint: 500 000 steps
## Limitations
- max_seq_length = 1024
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-puli,
title = {Jönnek a nagyok! BERT-Large, GPT-2 és GPT-3 nyelvmodellek magyar nyelvre},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Hungary},
author = {Yang, Zijian Győző and Dodé, Réka and Ferenczi, Gergő and Héja, Enikő and Jelencsik-Mátyus, Kinga and Kőrös, Ádám and Laki, László János and Ligeti-Nagy, Noémi and Vadász, Noémi and Váradi, Tamás},
pages = {247--262}
}
```
## Usage
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('NYTK/PULI-GPT-2')
model = GPT2Model.from_pretrained('NYTK/PULI-GPT-2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Usage with pipeline
```python
from transformers import pipeline
prompt = "Elmesélek egy történetet a nyelvtechnológiáról."
generator = pipeline(task="text-generation", model="NYTK/PULI-GPT-2")
print(generator(prompt)[0]["generated_text"])
```
|
NYTK/PULI-BERT-Large
|
NYTK
| 2023-06-08T11:39:36Z | 299 | 3 |
transformers
|
[
"transformers",
"pytorch",
"megatron-bert",
"fill-mask",
"puli",
"hu",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-01-09T14:51:30Z |
---
language:
- hu
tags:
- fill-mask
- puli
license: cc-by-nc-4.0
widget:
- text: Mesélek egy [MASK] az oroszlánról.
---
# PULI BERT-Large
For further details, see [our demo site](https://juniper.nytud.hu/demo/nlp).
- Hungarian BERT large model (MegatronBERT)
- Trained with Megatron-DeepSpeed [github](https://github.com/microsoft/Megatron-DeepSpeed)
- Dataset: 36.3 billion words
- Checkpoint: 1 500 000 steps
## Limitations
- max_seq_length = 1024
## Citation
If you use this model, please cite the following paper:
```
@inproceedings {yang-puli,
title = {Jönnek a nagyok! BERT-Large, GPT-2 és GPT-3 nyelvmodellek magyar nyelvre},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Hungary},
author = {Yang, Zijian Győző and Dodé, Réka and Ferenczi, Gergő and Héja, Enikő and Jelencsik-Mátyus, Kinga and Kőrös, Ádám and Laki, László János and Ligeti-Nagy, Noémi and Vadász, Noémi and Váradi, Tamás},
pages = {247--262}
}
```
## Usage
```python
from transformers import BertTokenizer, MegatronBertModel
tokenizer = BertTokenizer.from_pretrained('NYTK/PULI-BERT-Large', do_lower_case=False)
model = MegatronBertModel.from_pretrained('NYTK/PULI-BERT-Large')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
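For masked-token prediction (the card's declared pipeline), a `fill-mask` pipeline call is likely the simplest route, assuming the checkpoint includes the masked-LM head; the prompt below is the card's widget example:
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="NYTK/PULI-BERT-Large")

# Widget prompt from the card.
for prediction in fill("Mesélek egy [MASK] az oroszlánról."):
    print(prediction["token_str"], round(prediction["score"], 3))
```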
|
Domo123/tanya-mama-ner
|
Domo123
| 2023-06-08T11:32:26Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-08T10:17:52Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tanya-mama-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tanya-mama-ner
This model is a fine-tuned version of [cahya/xlm-roberta-base-indonesian-NER](https://huggingface.co/cahya/xlm-roberta-base-indonesian-NER) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1480
- Precision: 0.8193
- Recall: 0.8765
- F1: 0.8470
- Accuracy: 0.9521
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 120 | 0.1731 | 0.7970 | 0.8644 | 0.8294 | 0.9441 |
| No log | 2.0 | 240 | 0.1480 | 0.8193 | 0.8765 | 0.8470 | 0.9521 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
surya111/finetuning-sentiment-model-3000-samples
|
surya111
| 2023-06-08T11:31:38Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T11:09:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
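A minimal usage sketch with the `text-classification` pipeline (the card does not list label names, so verify how LABEL_0/LABEL_1 map to negative/positive against the model config; the reviews are illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="surya111/finetuning-sentiment-model-3000-samples")

# Illustrative IMDB-style reviews.
print(classifier("A surprisingly heartfelt film with great performances."))
print(classifier("Two hours of my life I will never get back."))
```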
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zanchat/falcon-1b
|
zanchat
| 2023-06-08T11:11:17Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2306.01116",
"arxiv:2005.14165",
"arxiv:2108.12409",
"arxiv:2205.14135",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-08T11:03:25Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
# Falcon-RW-1B
**Falcon-RW-1B is a 1B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 350B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb). It is made available under the Apache 2.0 license.**
See the 📓 [paper on arXiv](https://arxiv.org/abs/2306.01116) for more details.
RefinedWeb is a high-quality web dataset built by leveraging stringent filtering and large-scale deduplication. Falcon-RW-1B, trained on RefinedWeb only, matches or outperforms comparable models trained on curated data.
⚠️ This model is intended for use as a **research artifact**, to study the influence of training on web data alone. **If you are interested in state-of-the-art models, we recommend using Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b), both trained on >1,000 billion tokens.**
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-rw-1b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
# Model Card for Falcon-RW-1B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English;
- **License:** Apache 2.0.
### Model Source
- **Paper:** [https://arxiv.org/abs/2306.01116](https://arxiv.org/abs/2306.01116).
## Uses
### Direct Use
Research on large language models, specifically the influence of adequately filtered and deduplicated web data on the properties of large language models (fairness, safety, limitations, capabilities, etc.).
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
Broadly speaking, we would recommend Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) for any use not directly related to research on web data pipelines.
## Bias, Risks, and Limitations
Falcon-RW-1B is trained on English data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-RW-1B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-rw-1b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-RW-1B was trained on 350B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset. The data was tokenized with the GPT-2 tokenizer.
### Training Procedure
Falcon-RW-1B was trained on 32 A100 40GB GPUs, using only data parallelism with ZeRO.
#### Training Hyperparameters
Hyperparameters were adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)).
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 2e-4 | 500M tokens warm-up, cosine decay to 2e-5 |
| Weight decay | 1e-1 | |
| Batch size | 512 | 4B tokens ramp-up |
#### Speeds, Sizes, Times
Training happened in early December 2022 and took about six days.
## Evaluation
See the 📓 [paper on arXiv](https://arxiv.org/abs/2306.01116) for in-depth evaluation.
## Technical Specifications
### Model Architecture and Objective
Falcon-RW-1B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), but uses ALiBi ([Press et al., 2021](https://arxiv.org/abs/2108.12409)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)).
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 24 | |
| `d_model` | 2048 | |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 50304 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-RW-1B was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-RW-1B was trained with a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## Contact
falconllm@tii.ae
|
andrei-saceleanu/ro-offense-freematch
|
andrei-saceleanu
| 2023-06-08T11:00:44Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-06-08T11:00:21Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: ro-offense-freematch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ro-offense-freematch
This model is a fine-tuned version of [readerbench/RoBERT-base](https://huggingface.co/readerbench/RoBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
P3ps/bert-finetuned-cross-ner-v2
|
P3ps
| 2023-06-08T10:59:19Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-08T10:09:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-cross-ner-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-cross-ner-v2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1808
- Precision: 0.8289
- Recall: 0.8613
- F1: 0.8448
- Accuracy: 0.9550
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2086 | 1.0 | 2607 | 0.1994 | 0.7700 | 0.8138 | 0.7913 | 0.9447 |
| 0.126 | 2.0 | 5214 | 0.1740 | 0.8148 | 0.8495 | 0.8318 | 0.9533 |
| 0.0819 | 3.0 | 7821 | 0.1808 | 0.8289 | 0.8613 | 0.8448 | 0.9550 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lll-yuh-lll/YuhMix
|
lll-yuh-lll
| 2023-06-08T10:55:15Z | 0 | 28 | null |
[
"stable-diffusion",
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-24T01:39:47Z |
---
license: creativeml-openrail-m
language:
- ja
pipeline_tag: text-to-image
tags:
- stable-diffusion
---
## 【Overview】
"YuhMix" is a model created by block-merging (hierarchical merging) other models
on top of "Counterfeit" as a base.
It builds on Counterfeit's strong expressiveness in composition and posing and
**changes only the art style**; everything else was tuned to be affected as little as possible.
The recommended negative TI is "EasyNegativeV2".
There are no recommended settings for VAE, Steps, CFG Scale, Sampler, or Upscaler.
Set them to your own liking.
**Many thanks to the authors of the source models used in the merge.**
If there is a model you would like merged in, **additions will be considered**.
Twitter: [@lll_yuh_lll](https://twitter.com/lll_yuh_lll)
***
## 【Merge Source Models】
**YuhMix_A1: anime-style shading**
Counterfeit-V3.0 + ambientmix
**YuhMix_P1: slightly anime-style shading**
Counterfeit-V3.0 + Pika's New Generation v2.0
**YuhMix_L1: flat shading**
Counterfeit-V3.0 + 7th_anime_v3_B
**YuhMix_C1: flat + cute**
Counterfeit-V3.0 + CuteYukiMix v3.0
***
## 【YuhMix_A1】

```
2D, 1 girl, flying in the sky, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark, wing, feather, airplane, aircraft, bird
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2530832888, Size: 512x768, Model hash: 5b0478a78a, Model: YuhMix_A1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```

```
1 girl, adventurer, has weapon, action
Negative prompt: EasyNegativeV2, watermark
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1229935043, Size: 512x768, Model hash: 5b0478a78a, Model: YuhMix_A1_fp16, Denoising strength: 0.45, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 20, Hires upscaler: 4x-AnimeSharp
```
## 【YuhMix_P1】

```
2D, 1 girl, flying in the sky, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark, wing, feather, airplane, aircraft, bird
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 3471603083, Size: 512x768, Model hash: a8c732dd6d, Model: YuhMix_P1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```

```
2D, 1 girl, smile, school uniform, shinjuku, night scene, magic circle, action
Negative prompt: EasyNegativeV2, 3D, watermark
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1799350649, Size: 512x768, Model hash: a8c732dd6d, Model: YuhMix_P1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```
## 【YuhMix_L1】

```
2D, 1 girl, flying in the sky, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark, wing, feather, airplane, aircraft, bird
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 835610278, Size: 512x768, Model hash: 23eb8adb20, Model: YuhMix_L1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```

```
2D, 1 girl, smile, idol costume, shouting into a microphone, dancing, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 1192103882, Size: 512x768, Model hash: 23eb8adb20, Model: YuhMix_L1_fp16, Denoising strength: 0.55, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```
## 【YuhMix_C1】

```
2D, 1 girl, flying in the sky, wide shot
Negative prompt: EasyNegativeV2, 3D, watermark, wing, feather, airplane, aircraft, bird
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 986981883, Size: 512x768, Model hash: 9daf68fee9, Model: YuhMix_C1_fp16, Denoising strength: 0.5, Clip skip: 2, Version: v1.2.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp
```

```
holy sword, cute girl
Negative prompt: EasyNegativeV2, 3D, watermark, animal ears
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 2760921822, Size: 512x768, Model hash: 9daf68fee9, Model: YuhMix_C1_fp16, Denoising strength: 0.5, Clip skip: 2, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4x-AnimeSharp, Version: v1.3.2
```
|
genggui001/decapoda-research-llama-13b-megatron-states
|
genggui001
| 2023-06-08T10:49:08Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-06-08T10:41:45Z |
---
license: other
---
LLaMA-13B converted to work with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding capabilities and limitations of current language models, and developing techniques to improve those,
- evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
MJa6/gpt2-wikitext2
|
MJa6
| 2023-06-08T10:41:53Z | 177 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T10:39:11Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 7.5813
- eval_runtime: 21.0482
- eval_samples_per_second: 91.884
- eval_steps_per_second: 11.497
- epoch: 0.08
- step: 184
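As a quick illustration (not part of the auto-generated card), the checkpoint can be sampled with the `transformers` text-generation pipeline. The prompt and generation settings below are arbitrary examples, and since the evaluation above was logged very early in training (epoch 0.08), completions may be rough.
```python
from transformers import pipeline

# Load the fine-tuned GPT-2 checkpoint from the Hub and sample a continuation.
generator = pipeline("text-generation", model="MJa6/gpt2-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```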
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
fatimas/gpt2-wikitext2
|
fatimas
| 2023-06-08T10:41:45Z | 177 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T10:37:41Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 7.0633
- eval_runtime: 20.711
- eval_samples_per_second: 93.38
- eval_steps_per_second: 11.685
- epoch: 0.22
- step: 488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Khushnur/t5-small-end2end-questions-generation_squad_aug_
|
Khushnur
| 2023-06-08T10:37:27Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-08T09:55:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-end2end-questions-generation_squad_aug_
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-end2end-questions-generation_squad_aug_
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
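As an illustration (not from the original card), the checkpoint can be driven through the `transformers` text2text-generation pipeline. The exact input format used during fine-tuning is not documented here; the "generate questions:" prefix and the context sentence below are only assumptions commonly used for end-to-end question generation with T5.
```python
from transformers import pipeline

qg = pipeline("text2text-generation",
              model="Khushnur/t5-small-end2end-questions-generation_squad_aug_")

context = "The Amazon rainforest covers much of the Amazon basin of South America."
# The prefix is an assumption; adjust it to match the format used during training.
print(qg("generate questions: " + context, max_new_tokens=64))
```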
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
haddadalwi/bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad
|
haddadalwi
| 2023-06-08T10:32:51Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-01-27T13:49:20Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad
This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4152
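As an illustration (not part of the auto-generated card), here is a minimal extractive question-answering sketch using the `transformers` pipeline; the question and context below are toy examples, not from the evaluation data.
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="haddadalwi/bert-large-uncased-whole-word-masking-squad2-finetuned-islamic-squad",
)

result = qa(
    question="Over how many years was the Quran revealed?",
    context="Muslims believe the Quran was revealed to the Prophet Muhammad over a period "
            "of approximately 23 years, beginning in 610 CE.",
)
print(result["answer"], result["score"])
```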
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.3 | 100 | 0.3653 |
| No log | 2.6 | 200 | 0.4152 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
PT-10/flan-t5-small-samsum
|
PT-10
| 2023-06-08T10:24:52Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-08T09:57:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: flan-t5-small-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the samsum dataset.
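As an illustration (not part of the auto-generated card), here is a minimal dialogue-summarization sketch in the SAMSum style using the `transformers` pipeline; the dialogue and length settings are toy examples.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="PT-10/flan-t5-small-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```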
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Ybhav14/en_Spacy_Custom_ner2
|
Ybhav14
| 2023-06-08T10:08:55Z | 1 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2023-06-08T10:04:04Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_Spacy_Custom_ner2
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9911054638
- name: NER Recall
type: recall
value: 0.9961685824
- name: NER F Score
type: f_score
value: 0.9936305732
---
| Feature | Description |
| --- | --- |
| **Name** | `en_Spacy_Custom_ner2` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.3,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (14 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `BOOK`, `COMODITY`, `CONTAINER COUNT`, `CONTAINER SIZE`, `CONTAINER SIZE-COUNT`, `DESTINATION`, `ENQUIRY`, `HELP`, `INCOTERM`, `KYC`, `ORIGIN`, `SEARCH RATES`, `SHIP`, `SHIPMENT TYPE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 99.36 |
| `ENTS_P` | 99.11 |
| `ENTS_R` | 99.62 |
| `TOK2VEC_LOSS` | 10283.83 |
| `NER_LOSS` | 72242.77 |
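As an illustration (not part of the auto-generated card), a minimal usage sketch follows. It assumes the packaged pipeline wheel from this repo's "Files and versions" tab has already been installed with `pip`, which registers the package name used below; the input sentence is an arbitrary example chosen to match the label scheme.
```python
import spacy

# Load the installed spaCy pipeline package by name.
nlp = spacy.load("en_Spacy_Custom_ner2")

doc = nlp("Book a 20ft container from Mumbai to Rotterdam, FOB terms.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```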
|
mnavas/beto-finetuned-token-reqadjinsiders
|
mnavas
| 2023-06-08T10:06:54Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-07T14:29:47Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: beto-finetuned-token-reqadjinsiders
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-finetuned-token-reqadjinsiders
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7385
- Precision: 0.0833
- Recall: 0.1
- F1: 0.0909
- Accuracy: 0.9092
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.5869 | 1.0 | 10 | 0.4001 | 0.0 | 0.0 | 0.0 | 0.8196 |
| 0.2986 | 2.0 | 20 | 0.4095 | 0.0 | 0.0 | 0.0 | 0.8876 |
| 0.2215 | 3.0 | 30 | 0.3336 | 0.0 | 0.0 | 0.0 | 0.8643 |
| 0.1356 | 4.0 | 40 | 0.3362 | 0.0 | 0.0 | 0.0 | 0.8954 |
| 0.0717 | 5.0 | 50 | 0.3489 | 0.0 | 0.0 | 0.0 | 0.8987 |
| 0.0424 | 6.0 | 60 | 0.4066 | 0.0 | 0.0 | 0.0 | 0.9044 |
| 0.0301 | 7.0 | 70 | 0.3172 | 0.0741 | 0.1 | 0.0851 | 0.9227 |
| 0.0191 | 8.0 | 80 | 0.5007 | 0.0435 | 0.05 | 0.0465 | 0.9050 |
| 0.0155 | 9.0 | 90 | 0.5146 | 0.1 | 0.05 | 0.0667 | 0.9133 |
| 0.0174 | 10.0 | 100 | 0.3293 | 0.0465 | 0.1 | 0.0635 | 0.9122 |
| 0.0113 | 11.0 | 110 | 0.4793 | 0.0714 | 0.1 | 0.0833 | 0.9179 |
| 0.0136 | 12.0 | 120 | 0.4758 | 0.1905 | 0.2 | 0.1951 | 0.9259 |
| 0.0095 | 13.0 | 130 | 0.3407 | 0.0571 | 0.1 | 0.0727 | 0.9231 |
| 0.0113 | 14.0 | 140 | 0.3864 | 0.0833 | 0.1 | 0.0909 | 0.9076 |
| 0.0036 | 15.0 | 150 | 0.4718 | 0.0741 | 0.1 | 0.0851 | 0.9096 |
| 0.0036 | 16.0 | 160 | 0.5261 | 0.0882 | 0.15 | 0.1111 | 0.8965 |
| 0.0021 | 17.0 | 170 | 0.6655 | 0.0417 | 0.05 | 0.0455 | 0.8902 |
| 0.0033 | 18.0 | 180 | 0.5417 | 0.1212 | 0.2 | 0.1509 | 0.9054 |
| 0.0023 | 19.0 | 190 | 0.6521 | 0.1111 | 0.1 | 0.1053 | 0.9083 |
| 0.0021 | 20.0 | 200 | 0.4450 | 0.0909 | 0.15 | 0.1132 | 0.9214 |
| 0.0038 | 21.0 | 210 | 0.5652 | 0.1429 | 0.1 | 0.1176 | 0.9194 |
| 0.0088 | 22.0 | 220 | 0.5791 | 0.0833 | 0.1 | 0.0909 | 0.8874 |
| 0.0036 | 23.0 | 230 | 0.4630 | 0.1034 | 0.15 | 0.1224 | 0.9063 |
| 0.003 | 24.0 | 240 | 0.5352 | 0.12 | 0.15 | 0.1333 | 0.9144 |
| 0.0028 | 25.0 | 250 | 0.5582 | 0.1111 | 0.1 | 0.1053 | 0.9107 |
| 0.0016 | 26.0 | 260 | 0.6038 | 0.075 | 0.15 | 0.1 | 0.9009 |
| 0.0024 | 27.0 | 270 | 0.5846 | 0.0909 | 0.1 | 0.0952 | 0.9124 |
| 0.0011 | 28.0 | 280 | 0.5600 | 0.125 | 0.15 | 0.1364 | 0.8993 |
| 0.0007 | 29.0 | 290 | 0.5614 | 0.0938 | 0.15 | 0.1154 | 0.8954 |
| 0.0006 | 30.0 | 300 | 0.5480 | 0.1176 | 0.1 | 0.1081 | 0.9129 |
| 0.006 | 31.0 | 310 | 0.5170 | 0.1176 | 0.2 | 0.1481 | 0.9039 |
| 0.0012 | 32.0 | 320 | 0.5699 | 0.0769 | 0.05 | 0.0606 | 0.8961 |
| 0.0004 | 33.0 | 330 | 0.6046 | 0.0476 | 0.05 | 0.0488 | 0.8928 |
| 0.0002 | 34.0 | 340 | 0.6708 | 0.0556 | 0.05 | 0.0526 | 0.8919 |
| 0.0001 | 35.0 | 350 | 0.7087 | 0.0435 | 0.05 | 0.0465 | 0.8948 |
| 0.0002 | 36.0 | 360 | 0.7225 | 0.05 | 0.05 | 0.0500 | 0.8976 |
| 0.0 | 37.0 | 370 | 0.7294 | 0.0435 | 0.05 | 0.0465 | 0.8985 |
| 0.0003 | 38.0 | 380 | 0.7182 | 0.0370 | 0.05 | 0.0426 | 0.9026 |
| 0.0002 | 39.0 | 390 | 0.7298 | 0.05 | 0.05 | 0.0500 | 0.9061 |
| 0.0003 | 40.0 | 400 | 0.7313 | 0.0588 | 0.05 | 0.0541 | 0.9068 |
| 0.0 | 41.0 | 410 | 0.7412 | 0.0526 | 0.05 | 0.0513 | 0.9068 |
| 0.0 | 42.0 | 420 | 0.7447 | 0.0556 | 0.05 | 0.0526 | 0.9068 |
| 0.0 | 43.0 | 430 | 0.7465 | 0.0588 | 0.05 | 0.0541 | 0.9076 |
| 0.0 | 44.0 | 440 | 0.7500 | 0.0455 | 0.05 | 0.0476 | 0.9070 |
| 0.0 | 45.0 | 450 | 0.7525 | 0.0435 | 0.05 | 0.0465 | 0.9065 |
| 0.0002 | 46.0 | 460 | 0.7540 | 0.0476 | 0.05 | 0.0488 | 0.9068 |
| 0.0003 | 47.0 | 470 | 0.7422 | 0.0455 | 0.05 | 0.0476 | 0.9068 |
| 0.0 | 48.0 | 480 | 0.7378 | 0.0435 | 0.05 | 0.0465 | 0.9070 |
| 0.0 | 49.0 | 490 | 0.7384 | 0.0417 | 0.05 | 0.0455 | 0.9068 |
| 0.0 | 50.0 | 500 | 0.7414 | 0.0455 | 0.05 | 0.0476 | 0.9070 |
| 0.0 | 51.0 | 510 | 0.7446 | 0.0455 | 0.05 | 0.0476 | 0.9070 |
| 0.0 | 52.0 | 520 | 0.7432 | 0.0385 | 0.05 | 0.0435 | 0.9061 |
| 0.0003 | 53.0 | 530 | 0.7474 | 0.0417 | 0.05 | 0.0455 | 0.9065 |
| 0.0002 | 54.0 | 540 | 0.7513 | 0.04 | 0.05 | 0.0444 | 0.9068 |
| 0.0 | 55.0 | 550 | 0.7505 | 0.0455 | 0.05 | 0.0476 | 0.9076 |
| 0.0003 | 56.0 | 560 | 0.7419 | 0.0417 | 0.05 | 0.0455 | 0.9072 |
| 0.0 | 57.0 | 570 | 0.7352 | 0.04 | 0.05 | 0.0444 | 0.9070 |
| 0.0 | 58.0 | 580 | 0.7330 | 0.04 | 0.05 | 0.0444 | 0.9068 |
| 0.0 | 59.0 | 590 | 0.7330 | 0.04 | 0.05 | 0.0444 | 0.9063 |
| 0.0 | 60.0 | 600 | 0.7343 | 0.04 | 0.05 | 0.0444 | 0.9061 |
| 0.0 | 61.0 | 610 | 0.7370 | 0.0385 | 0.05 | 0.0435 | 0.9063 |
| 0.0003 | 62.0 | 620 | 0.7303 | 0.04 | 0.05 | 0.0444 | 0.9063 |
| 0.0 | 63.0 | 630 | 0.7275 | 0.04 | 0.05 | 0.0444 | 0.9059 |
| 0.0 | 64.0 | 640 | 0.7283 | 0.04 | 0.05 | 0.0444 | 0.9057 |
| 0.0 | 65.0 | 650 | 0.7312 | 0.04 | 0.05 | 0.0444 | 0.9059 |
| 0.0002 | 66.0 | 660 | 0.7243 | 0.0345 | 0.05 | 0.0408 | 0.9074 |
| 0.0001 | 67.0 | 670 | 0.7195 | 0.05 | 0.05 | 0.0500 | 0.9081 |
| 0.0001 | 68.0 | 680 | 0.7350 | 0.0714 | 0.05 | 0.0588 | 0.9061 |
| 0.0001 | 69.0 | 690 | 0.7750 | 0.0625 | 0.05 | 0.0556 | 0.9061 |
| 0.0005 | 70.0 | 700 | 0.6559 | 0.0571 | 0.1 | 0.0727 | 0.9031 |
| 0.0003 | 71.0 | 710 | 0.6497 | 0.0385 | 0.05 | 0.0435 | 0.9131 |
| 0.0002 | 72.0 | 720 | 0.6621 | 0.0588 | 0.05 | 0.0541 | 0.9133 |
| 0.0007 | 73.0 | 730 | 0.6093 | 0.0741 | 0.1 | 0.0851 | 0.9126 |
| 0.0004 | 74.0 | 740 | 0.6184 | 0.0909 | 0.1 | 0.0952 | 0.9135 |
| 0.0005 | 75.0 | 750 | 0.5911 | 0.0952 | 0.1 | 0.0976 | 0.9142 |
| 0.0001 | 76.0 | 760 | 0.5567 | 0.0625 | 0.1 | 0.0769 | 0.9102 |
| 0.0002 | 77.0 | 770 | 0.5670 | 0.0571 | 0.1 | 0.0727 | 0.9096 |
| 0.0002 | 78.0 | 780 | 0.5940 | 0.0588 | 0.1 | 0.0741 | 0.9129 |
| 0.0001 | 79.0 | 790 | 0.6299 | 0.0455 | 0.05 | 0.0476 | 0.9139 |
| 0.0 | 80.0 | 800 | 0.6449 | 0.0455 | 0.05 | 0.0476 | 0.9135 |
| 0.0 | 81.0 | 810 | 0.6519 | 0.0417 | 0.05 | 0.0455 | 0.9131 |
| 0.0002 | 82.0 | 820 | 0.6378 | 0.0690 | 0.1 | 0.0816 | 0.9118 |
| 0.0 | 83.0 | 830 | 0.6267 | 0.0588 | 0.1 | 0.0741 | 0.9046 |
| 0.0004 | 84.0 | 840 | 0.6174 | 0.0625 | 0.1 | 0.0769 | 0.9065 |
| 0.0002 | 85.0 | 850 | 0.6174 | 0.0714 | 0.1 | 0.0833 | 0.9124 |
| 0.0001 | 86.0 | 860 | 0.6225 | 0.0909 | 0.1 | 0.0952 | 0.9135 |
| 0.0001 | 87.0 | 870 | 0.6384 | 0.0526 | 0.05 | 0.0513 | 0.9144 |
| 0.0 | 88.0 | 880 | 0.6604 | 0.0625 | 0.05 | 0.0556 | 0.9139 |
| 0.0 | 89.0 | 890 | 0.6694 | 0.0625 | 0.05 | 0.0556 | 0.9137 |
| 0.0 | 90.0 | 900 | 0.6711 | 0.0588 | 0.05 | 0.0541 | 0.9133 |
| 0.0001 | 91.0 | 910 | 0.6526 | 0.0435 | 0.05 | 0.0465 | 0.9124 |
| 0.0 | 92.0 | 920 | 0.6450 | 0.0417 | 0.05 | 0.0455 | 0.9124 |
| 0.0002 | 93.0 | 930 | 0.6504 | 0.0417 | 0.05 | 0.0455 | 0.9113 |
| 0.0 | 94.0 | 940 | 0.6711 | 0.0455 | 0.05 | 0.0476 | 0.9118 |
| 0.0 | 95.0 | 950 | 0.6789 | 0.0417 | 0.05 | 0.0455 | 0.9118 |
| 0.0 | 96.0 | 960 | 0.6828 | 0.0476 | 0.05 | 0.0488 | 0.9111 |
| 0.0 | 97.0 | 970 | 0.6863 | 0.0455 | 0.05 | 0.0476 | 0.9111 |
| 0.0001 | 98.0 | 980 | 0.6811 | 0.04 | 0.05 | 0.0444 | 0.9115 |
| 0.0 | 99.0 | 990 | 0.6787 | 0.0833 | 0.1 | 0.0909 | 0.9133 |
| 0.0001 | 100.0 | 1000 | 0.6914 | 0.0476 | 0.05 | 0.0488 | 0.9120 |
| 0.0 | 101.0 | 1010 | 0.7028 | 0.0588 | 0.05 | 0.0541 | 0.9118 |
| 0.0 | 102.0 | 1020 | 0.7089 | 0.0556 | 0.05 | 0.0526 | 0.9109 |
| 0.0 | 103.0 | 1030 | 0.7152 | 0.0588 | 0.05 | 0.0541 | 0.9111 |
| 0.0 | 104.0 | 1040 | 0.7151 | 0.0625 | 0.05 | 0.0556 | 0.9107 |
| 0.0 | 105.0 | 1050 | 0.7126 | 0.0556 | 0.05 | 0.0526 | 0.9105 |
| 0.0 | 106.0 | 1060 | 0.7065 | 0.0526 | 0.05 | 0.0513 | 0.9094 |
| 0.0002 | 107.0 | 1070 | 0.7154 | 0.05 | 0.05 | 0.0500 | 0.9089 |
| 0.0001 | 108.0 | 1080 | 0.6777 | 0.12 | 0.15 | 0.1333 | 0.9078 |
| 0.0 | 109.0 | 1090 | 0.6683 | 0.1 | 0.15 | 0.12 | 0.9074 |
| 0.0001 | 110.0 | 1100 | 0.6622 | 0.0909 | 0.15 | 0.1132 | 0.9070 |
| 0.0003 | 111.0 | 1110 | 0.6750 | 0.08 | 0.1 | 0.0889 | 0.9057 |
| 0.0001 | 112.0 | 1120 | 0.7000 | 0.1053 | 0.1 | 0.1026 | 0.9061 |
| 0.0001 | 113.0 | 1130 | 0.7136 | 0.1053 | 0.1 | 0.1026 | 0.9046 |
| 0.0001 | 114.0 | 1140 | 0.7234 | 0.1 | 0.1 | 0.1000 | 0.9037 |
| 0.0 | 115.0 | 1150 | 0.7643 | 0.0870 | 0.1 | 0.0930 | 0.8998 |
| 0.0001 | 116.0 | 1160 | 0.7801 | 0.0769 | 0.1 | 0.0870 | 0.9002 |
| 0.0 | 117.0 | 1170 | 0.7872 | 0.0769 | 0.1 | 0.0870 | 0.9009 |
| 0.0003 | 118.0 | 1180 | 0.7597 | 0.0690 | 0.1 | 0.0816 | 0.8983 |
| 0.0002 | 119.0 | 1190 | 0.7564 | 0.0606 | 0.1 | 0.0755 | 0.8989 |
| 0.0 | 120.0 | 1200 | 0.7558 | 0.0606 | 0.1 | 0.0755 | 0.8998 |
| 0.0 | 121.0 | 1210 | 0.7566 | 0.0625 | 0.1 | 0.0769 | 0.9002 |
| 0.0 | 122.0 | 1220 | 0.7579 | 0.0606 | 0.1 | 0.0755 | 0.8991 |
| 0.0 | 123.0 | 1230 | 0.7603 | 0.0606 | 0.1 | 0.0755 | 0.8989 |
| 0.0 | 124.0 | 1240 | 0.7626 | 0.0667 | 0.1 | 0.08 | 0.8980 |
| 0.0 | 125.0 | 1250 | 0.7645 | 0.0690 | 0.1 | 0.0816 | 0.8980 |
| 0.0 | 126.0 | 1260 | 0.7666 | 0.0625 | 0.1 | 0.0769 | 0.8978 |
| 0.0 | 127.0 | 1270 | 0.7689 | 0.0645 | 0.1 | 0.0784 | 0.8976 |
| 0.0 | 128.0 | 1280 | 0.7707 | 0.0645 | 0.1 | 0.0784 | 0.8974 |
| 0.0 | 129.0 | 1290 | 0.7718 | 0.0645 | 0.1 | 0.0784 | 0.8967 |
| 0.0 | 130.0 | 1300 | 0.7730 | 0.0606 | 0.1 | 0.0755 | 0.8976 |
| 0.0 | 131.0 | 1310 | 0.7738 | 0.0606 | 0.1 | 0.0755 | 0.8989 |
| 0.0003 | 132.0 | 1320 | 0.7744 | 0.0588 | 0.1 | 0.0741 | 0.9002 |
| 0.0 | 133.0 | 1330 | 0.7762 | 0.0606 | 0.1 | 0.0755 | 0.9013 |
| 0.0 | 134.0 | 1340 | 0.7784 | 0.0606 | 0.1 | 0.0755 | 0.9037 |
| 0.0 | 135.0 | 1350 | 0.7798 | 0.0667 | 0.1 | 0.08 | 0.9048 |
| 0.0002 | 136.0 | 1360 | 0.7357 | 0.0588 | 0.1 | 0.0741 | 0.9002 |
| 0.0002 | 137.0 | 1370 | 0.6891 | 0.08 | 0.1 | 0.0889 | 0.9 |
| 0.0001 | 138.0 | 1380 | 0.6732 | 0.0769 | 0.1 | 0.0870 | 0.9065 |
| 0.0001 | 139.0 | 1390 | 0.6808 | 0.0870 | 0.1 | 0.0930 | 0.9096 |
| 0.0 | 140.0 | 1400 | 0.6845 | 0.0833 | 0.1 | 0.0909 | 0.9098 |
| 0.0 | 141.0 | 1410 | 0.6880 | 0.0870 | 0.1 | 0.0930 | 0.9096 |
| 0.0 | 142.0 | 1420 | 0.6915 | 0.0870 | 0.1 | 0.0930 | 0.9096 |
| 0.0 | 143.0 | 1430 | 0.6945 | 0.08 | 0.1 | 0.0889 | 0.9096 |
| 0.0 | 144.0 | 1440 | 0.6966 | 0.0769 | 0.1 | 0.0870 | 0.9094 |
| 0.0 | 145.0 | 1450 | 0.6986 | 0.0909 | 0.1 | 0.0952 | 0.9109 |
| 0.0 | 146.0 | 1460 | 0.7015 | 0.0952 | 0.1 | 0.0976 | 0.9109 |
| 0.0 | 147.0 | 1470 | 0.7036 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 148.0 | 1480 | 0.7054 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 149.0 | 1490 | 0.7078 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 150.0 | 1500 | 0.7091 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 151.0 | 1510 | 0.7111 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 152.0 | 1520 | 0.7127 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 153.0 | 1530 | 0.7141 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 154.0 | 1540 | 0.7160 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 155.0 | 1550 | 0.7191 | 0.1053 | 0.1 | 0.1026 | 0.9109 |
| 0.0 | 156.0 | 1560 | 0.7205 | 0.1053 | 0.1 | 0.1026 | 0.9109 |
| 0.0 | 157.0 | 1570 | 0.7217 | 0.1053 | 0.1 | 0.1026 | 0.9109 |
| 0.0 | 158.0 | 1580 | 0.7225 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 159.0 | 1590 | 0.7231 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 160.0 | 1600 | 0.7238 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 161.0 | 1610 | 0.7245 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 162.0 | 1620 | 0.7252 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 163.0 | 1630 | 0.7258 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 164.0 | 1640 | 0.7261 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 165.0 | 1650 | 0.7266 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 166.0 | 1660 | 0.7273 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 167.0 | 1670 | 0.7278 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 168.0 | 1680 | 0.7286 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 169.0 | 1690 | 0.7295 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 170.0 | 1700 | 0.7303 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 171.0 | 1710 | 0.7310 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0 | 172.0 | 1720 | 0.7316 | 0.1 | 0.1 | 0.1000 | 0.9113 |
| 0.0002 | 173.0 | 1730 | 0.7248 | 0.1 | 0.1 | 0.1000 | 0.9107 |
| 0.0 | 174.0 | 1740 | 0.7180 | 0.0909 | 0.1 | 0.0952 | 0.9096 |
| 0.0003 | 175.0 | 1750 | 0.7154 | 0.0909 | 0.1 | 0.0952 | 0.9096 |
| 0.0 | 176.0 | 1760 | 0.7161 | 0.0909 | 0.1 | 0.0952 | 0.9094 |
| 0.0 | 177.0 | 1770 | 0.7251 | 0.0870 | 0.1 | 0.0930 | 0.9094 |
| 0.0 | 178.0 | 1780 | 0.7282 | 0.0870 | 0.1 | 0.0930 | 0.9094 |
| 0.0 | 179.0 | 1790 | 0.7297 | 0.0870 | 0.1 | 0.0930 | 0.9094 |
| 0.0 | 180.0 | 1800 | 0.7304 | 0.0870 | 0.1 | 0.0930 | 0.9094 |
| 0.0 | 181.0 | 1810 | 0.7308 | 0.0870 | 0.1 | 0.0930 | 0.9094 |
| 0.0 | 182.0 | 1820 | 0.7315 | 0.0870 | 0.1 | 0.0930 | 0.9094 |
| 0.0 | 183.0 | 1830 | 0.7334 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 184.0 | 1840 | 0.7345 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 185.0 | 1850 | 0.7349 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 186.0 | 1860 | 0.7353 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 187.0 | 1870 | 0.7356 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 188.0 | 1880 | 0.7360 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 189.0 | 1890 | 0.7365 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 190.0 | 1900 | 0.7368 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 191.0 | 1910 | 0.7370 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 192.0 | 1920 | 0.7374 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 193.0 | 1930 | 0.7375 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 194.0 | 1940 | 0.7378 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 195.0 | 1950 | 0.7379 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 196.0 | 1960 | 0.7378 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 197.0 | 1970 | 0.7381 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 198.0 | 1980 | 0.7384 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 199.0 | 1990 | 0.7385 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
| 0.0 | 200.0 | 2000 | 0.7385 | 0.0833 | 0.1 | 0.0909 | 0.9092 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
VaianiLorenzo/ViPER-VTF
|
VaianiLorenzo
| 2023-06-08T10:06:49Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-06-08T09:21:46Z |
# ViPER-VTF
## (Vision Text FAU)
This repository contains the checkpoints for the ViPER model.
It is a Perceiver-based model finetuned on the concatenation of visual, textual and FAU-related features.
For more information on how to use this model please refer to the following [repository](https://github.com/VaianiLorenzo/ViPER)
If you find this useful please cite:
```
@inproceedings{vaiani2022viper,
title={ViPER: Video-based Perceiver for Emotion Recognition},
author={Vaiani, Lorenzo and La Quatra, Moreno and Cagliero, Luca and Garza, Paolo},
booktitle={Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge},
pages={67--73},
year={2022}
}
```
For any other questions, feel free to contact me at lorenzo.vaiani@polito.it
|
DREX-Institute/potat1.pth
|
DREX-Institute
| 2023-06-08T10:05:01Z | 1 | 6 |
diffusers
|
[
"diffusers",
"text-to-video",
"diffusers:TextToVideoSDPipeline",
"region:us"
] |
text-to-video
| 2023-06-06T09:40:06Z |
---
library_name: diffusers
pipeline_tag: text-to-video
---
The original weights come from
https://huggingface.co/camenduru/potat1/tree/main
The potat1 model was trained by @camenduru.
This repository provides a .pth conversion of camenduru's potat1.
potat1.pth is renamed to text2video_pytorch_model.pth so that it can be used directly in ModelScope (see the sketch below).
The .pth conversion is based on @camenduru's potat1.
Many thanks to camenduru for opening up more possibilities for text-to-video.
Approved by the original author @camenduru:
https://twitter.com/camenduru
https://discord.com/invite/k5BwmmvJJU
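As an illustration of the ModelScope usage mentioned above, here is a minimal, hedged sketch. It assumes the renamed weights file (text2video_pytorch_model.pth) has been placed into a local copy of the ModelScope text-to-video-synthesis model directory; the directory path and prompt below are placeholders.
```python
from modelscope.pipelines import pipeline
from modelscope.outputs import OutputKeys

# Placeholder: local ModelScope text-to-video model directory containing the renamed .pth file.
model_dir = "./text-to-video-synthesis"

pipe = pipeline("text-to-video-synthesis", model=model_dir)
result = pipe({"text": "a cinematic drone shot over a snowy mountain village"})
print(result[OutputKeys.OUTPUT_VIDEO])  # path of the generated video file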
|
csukuangfj/visionfive2-sd-card-img
|
csukuangfj
| 2023-06-08T10:02:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-08T09:47:58Z |
# Introduction
This repo contains `sdcard.img` built from
https://github.com/starfive-tech/VisionFive2
on Ubuntu 23.04 using the following commands:
```bash
sudo apt update
sudo apt-get install build-essential g++ git autoconf \
automake autotools-dev texinfo bison xxd curl flex gawk \
gdisk gperf libgmp-dev libmpfr-dev libmpc-dev libz-dev \
libssl-dev libncurses-dev libtool patchutils python3 python3-dev screen \
texinfo unzip zlib1g-dev libyaml-dev wget cpio bc dosfstools \
mtools device-tree-compiler libglib2.0-dev libpixman-1-dev kpartx
sudo apt-get install git-lfs
cd ~/
git clone https://github.com/starfive-tech/VisionFive2.git
cd VisionFive2
git checkout JH7110_VisionFive2_devel
git submodule update --init --recursive
cd buildroot && git checkout --track origin/JH7110_VisionFive2_devel && cd ..
cd u-boot && git checkout --track origin/JH7110_VisionFive2_devel && cd ..
cd linux && git checkout --track origin/JH7110_VisionFive2_devel && cd ..
cd opensbi && git checkout master && cd ..
cd soft_3rdpart && git checkout JH7110_VisionFive2_devel && cd ..
cd ~/VisionFive2/soft_3rdpart/IMG_GPU/out
git lfs pull
cd ~/VisionFive2
make -j$(nproc)
make buildroot_rootfs -j$(nproc)
make img
```
The generated file is `work/sdcard.img`; the build took about a day.
The username for the image is `root` and the password is `starfive`.
|
VaianiLorenzo/ViPER-VAT
|
VaianiLorenzo
| 2023-06-08T09:59:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-08T09:21:07Z |
# ViPER-VAT
## (Vision Audio Text)
This repository contains the checkpoints for the ViPER model.
It is a Perceiver-based model finetuned on the concatenation of visual, acoustic and textual features.
For more information on how to use this model please refer to the following [repository](https://github.com/VaianiLorenzo/ViPER)
If you find this useful please cite:
```
@inproceedings{vaiani2022viper,
title={ViPER: Video-based Perceiver for Emotion Recognition},
author={Vaiani, Lorenzo and La Quatra, Moreno and Cagliero, Luca and Garza, Paolo},
booktitle={Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge},
pages={67--73},
year={2022}
}
```
For any other questions, feel free to contact me at lorenzo.vaiani@polito.it
|
VaianiLorenzo/ViPER-VF
|
VaianiLorenzo
| 2023-06-08T09:58:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-08T09:20:52Z |
# ViPER-VF
## (Vision FAU)
This repository contains the checkpoints for the ViPER model.
It is a Perceiver-based model finetuned on the concatenation of visual and FAU-related features.
For more information on how to use this model please refer to the following [repository](https://github.com/VaianiLorenzo/ViPER)
If you find this useful please cite:
```
@inproceedings{vaiani2022viper,
title={ViPER: Video-based Perceiver for Emotion Recognition},
author={Vaiani, Lorenzo and La Quatra, Moreno and Cagliero, Luca and Garza, Paolo},
booktitle={Proceedings of the 3rd International on Multimodal Sentiment Analysis Workshop and Challenge},
pages={67--73},
year={2022}
}
```
For any other questions, feel free to contact me at lorenzo.vaiani@polito.it
|
mfaiq2307/faiq-wav2vec2-large-xlsr-indo-demo-v100-batch64
|
mfaiq2307
| 2023-06-08T09:50:38Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-08T08:01:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_11_0
metrics:
- wer
model-index:
- name: faiq-wav2vec2-large-xlsr-indo-demo-v100-batch64
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_11_0
type: common_voice_11_0
config: id
split: test
args: id
metrics:
- name: Wer
type: wer
value: 0.43878832999860407
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# faiq-wav2vec2-large-xlsr-indo-demo-v100-batch64
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3267
- Wer: 0.4388
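As an illustration (not part of the auto-generated card), here is a minimal transcription sketch with the `transformers` ASR pipeline; the audio path is a placeholder and should point to a local 16 kHz Indonesian speech clip (decoding requires ffmpeg).
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mfaiq2307/faiq-wav2vec2-large-xlsr-indo-demo-v100-batch64",
)
# Placeholder path: any local 16 kHz mono audio file.
print(asr("sample_indonesian.wav"))
```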
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2549 | 11.59 | 400 | 0.6715 | 0.7735 |
| 0.3726 | 23.19 | 800 | 0.3267 | 0.4388 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.6.1
- Tokenizers 0.13.3
|
rs224/bloom-1b7-4bit
|
rs224
| 2023-06-08T09:50:20Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-08T09:50:20Z |
---
license: bigscience-openrail-m
---
|
TMElyralab/lyraBELLE
|
TMElyralab
| 2023-06-08T09:37:50Z | 0 | 3 | null |
[
"LLM",
"BELLE",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T07:02:03Z |
---
license: apache-2.0
language:
- en
tags:
- LLM
- BELLE
---
## Model Card for lyraBELLE
lyraBELLE is currently the **fastest BELLE model** available. To the best of our knowledge, it is the **first accelerated version of BELLE**.
The inference speed of lyraBELLE achieves **3.3x+** acceleration over the original version.
Among its main features are:
- weights: the original BELLE-7B-2M weights released by BelleGroup.
- device: Nvidia Ampere architecture or newer (e.g., A100)
Note that:
**Some interfaces/code are reserved for future use (see the demo below).**
- **int8 mode**: not supported yet; always set it to 0.
- **data type**: only `fp16` is available.
## Speed
### test environment
- device: Nvidia A100 40G
- warmup: 10 rounds
- precision: fp16
- batch size: 64
- language: Chinese, kept the same within a batch.
- do_sample: True, so the model generates slightly different answers to the same questions.
|version|speed|
|:-:|:-:|
|original|826.34 tokens/sec|
|lyraBELLE|2701.71 tokens/sec|
## Model Sources
- **Repository:** [BelleGroup/BELLE-7B-2M](https://huggingface.co/BelleGroup/BELLE-7B-2M?clone=true)
## Environment
- **Docker image available** at https://hub.docker.com/repository/docker/bigmoyan/lyrallm/general; pull the image with:
```
docker pull bigmoyan/lyrallm:v0.1
```
## Uses
```python
from lyraBelle import LyraBelle
data_type = "fp16"
prompts = "今天天气大概 25度,有点小雨,吹着风,我想去户外散步,应该穿什么样的衣服裤子鞋子搭配。"
model_dir = "./model"
model_name = "1-gpu-fp16.h5"
max_output_length = 512
# int8 mode is not supported; data_type only supports fp16
model = LyraBelle(model_dir, model_name, data_type, 0)
output_texts = model.generate(prompts, output_length=max_output_length,top_k=30, top_p=0.85, temperature=0.35, repetition_penalty=1.2, do_sample=True)
print(output_texts)
```
## Demo output
### input
今天天气大概 25度,有点小雨,吹着风,我想去户外散步,应该穿什么样的衣服裤子鞋子搭配。 (English: It's about 25°C today, with light rain and a bit of wind. I'd like to take a walk outdoors; what combination of clothes, pants, and shoes should I wear?)
### output
建议穿着一件轻便的衬衫或T恤、一条牛仔裤和一双运动鞋或休闲鞋。如果下雨了可以带上一把伞。 (English: I suggest a light shirt or T-shirt, a pair of jeans, and sneakers or casual shoes. If it rains, you can bring an umbrella.)
## Citation
``` bibtex
@Misc{lyraBELLE2023,
author = {Kangjian Wu and Zhengtao Wang and Bin Wu},
title = {lyraBELLE: Accelerating BELLE by 3x+},
howpublished = {\url{https://huggingface.co/TMElyralab/lyraBELLE}},
year = {2023}
}
```
## Reporting bugs
- Start a discussion to report any bugs: https://huggingface.co/TMElyralab/lyraBELLE/discussions
- Report bugs with a `[bug]` tag in the title.
|
bigcode/gpt_bigcode-santacoder
|
bigcode
| 2023-06-08T09:20:22Z | 43,656 | 25 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_bigcode",
"text-generation",
"code",
"dataset:bigcode/the-stack",
"license:openrail",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-04-06T01:35:04Z |
---
license: openrail
datasets:
- bigcode/the-stack
language:
- code
programming_language:
- Java
- JavaScript
- Python
pipeline_tag: text-generation
inference: false
model-index:
- name: SantaCoder
results:
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval (Python)
metrics:
- name: pass@1
type: pass@1
value: 0.18
verified: false
- name: pass@10
type: pass@10
value: 0.29
verified: false
- name: pass@100
type: pass@100
value: 0.49
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL MBPP (Python)
metrics:
- name: pass@1
type: pass@1
value: 0.35
verified: false
- name: pass@10
type: pass@10
value: 0.58
verified: false
- name: pass@100
type: pass@100
value: 0.77
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval (JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 0.16
verified: false
- name: pass@10
type: pass@10
value: 0.27
verified: false
- name: pass@100
type: pass@100
value: 0.47
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL MBPP (Javascript)
metrics:
- name: pass@1
type: pass@1
value: 0.28
verified: false
- name: pass@10
type: pass@10
value: 0.51
verified: false
- name: pass@100
type: pass@100
value: 0.70
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval (Java)
metrics:
- name: pass@1
type: pass@1
value: 0.15
verified: false
- name: pass@10
type: pass@10
value: 0.26
verified: false
- name: pass@100
type: pass@100
value: 0.41
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL MBPP (Java)
metrics:
- name: pass@1
type: pass@1
value: 0.28
verified: false
- name: pass@10
type: pass@10
value: 0.44
verified: false
- name: pass@100
type: pass@100
value: 0.59
verified: false
- task:
type: text-generation
dataset:
type: loubnabnl/humaneval_infilling
name: HumanEval FIM (Python)
metrics:
- name: single_line
type: exact_match
value: 0.44
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval FIM (Java)
metrics:
- name: single_line
type: exact_match
value: 0.62
verified: false
- task:
type: text-generation
dataset:
type: nuprl/MultiPL-E
name: MultiPL HumanEval FIM (JavaScript)
metrics:
- name: single_line
type: exact_match
value: 0.60
verified: false
- task:
type: text-generation
dataset:
type: code_x_glue_ct_code_to_text
name: CodeXGLUE code-to-text (Python)
metrics:
- name: BLEU
type: bleu
value: 18.13
verified: false
---
# SantaCoder

Play with the model on the [SantaCoder Space Demo](https://huggingface.co/spaces/bigcode/santacoder-demo).
# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)
# Model Summary
This is the same model as [SantaCoder](https://huggingface.co/bigcode/santacoder) but it can be loaded with transformers >=4.28.1 to use the GPTBigCode architecture.
We refer the reader to the [SantaCoder model page](https://huggingface.co/bigcode/santacoder) for full documentation about this model
- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** [🎅SantaCoder: Don't reach for the stars!🌟](https://t.co/YV3pzUbYOr)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** Python, Java, and JavaScript
There are two versions (branches) of the model:
* `main`: Uses the `gpt_bigcode` model. [Requires the bigcode fork of transformers](https://github.com/bigcode-project/transformers).
* `main_custom`: Packaged with its modeling code. Requires `transformers>=4.27`.
Alternatively, it can run on older versions by setting the configuration parameter `activation_function = "gelu_pytorch_tanh"`.
# Use
## Intended use
The model was trained on GitHub code. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well.
You should phrase commands like they occur in source code such as comments (e.g. `# the following function computes the sqrt`) or write a function signature and docstring and let the model complete the function body.
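As a hedged illustration of this completion-style prompting (the snippet is not copied from this card), here is a minimal sketch for the `main` branch with `transformers >= 4.28.1`; the prompt and generation length are arbitrary examples.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/gpt_bigcode-santacoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# The prompt is phrased as source code, not as an instruction.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0]))
```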
### Attribution & Other Requirements
The pretraining dataset of the model was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/santacoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.
# Limitations
The model has been trained on source code in Python, Java, and JavaScript. The predominant natural language in the source is English, although other languages are also present. As such, the model can generate code snippets given some context, but the generated code is not guaranteed to work as intended. It can be inefficient, contain bugs or exploits.
# Training
## Model
- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 600K
- **Pretraining tokens:** 236 billion
- **Precision:** float16
## Hardware
- **GPUs:** 96 Tesla V100
- **Training time:** 6.2 days
- **Total FLOPS:** 2.1 x 10e21
## Software
- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **FP16 if applicable:** [apex](https://github.com/NVIDIA/apex)
# License
The model is licensed under the CodeML Open RAIL-M v0.1 license. You can find the full license [here](https://huggingface.co/spaces/bigcode/license).
|
TheBloke/MPT-7B-Storywriter-GGML
|
TheBloke
| 2023-06-08T09:00:07Z | 23 | 56 |
transformers
|
[
"transformers",
"mpt",
"Composer",
"MosaicML",
"llm-foundry",
"dataset:the_pile_books3",
"arxiv:2108.12409",
"arxiv:2205.14135",
"arxiv:2302.06675",
"license:apache-2.0",
"region:us"
] | null | 2023-05-18T20:20:42Z |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
datasets:
- the_pile_books3
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# MPT-7B-Storywriter GGML
This is GGML format quantised 4-bit, 5-bit and 8-bit models of [MosaicML's MPT-7B-Storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter).
This repo is the result of converting to GGML and quantising.
Please note that these MPT GGMLs are **not compatible with llama.cpp**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [MPT-7B: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-GGML).
* [MPT-7B-Instruct: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Instruct-GGML).
* [MPT-7B-Storywriter: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML).
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
`mpt-7b-storywriter.ggmlv3.q4_0.bin` | q4_0 | 4bit | 4.21GB | 7.0GB | 4-bit. |
`mpt-7b-storywriter.ggmlv3.q4_1.bin` | q4_1 | 4bit | 4.63GB | 7.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
`mpt-7b-storywriter.ggmlv3.q5_0.bin` | q5_0 | 5bit | 4.63GB | 7.5GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
`mpt-7b-storywriter.ggmlv3.q5_1.bin` | q5_1 | 5bit | 5.06GB | 7.5GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
`mpt-7b-storywriter.ggmlv3.q8_0.bin` | q8_0 | 8bit | 7.58GB | 9.0GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
`mpt-7b-storywriter.ggmlv3.fp16.bin` | fp16 | 16bit | GB | GB | Full 16-bit. |
## Compatibilty
These files are **not** compatible with llama.cpp.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The GPT4All-UI which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
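As an illustration of one of these options, here is a minimal ctransformers sketch (not from the original card); the quantisation file chosen below is just one example from the table above, and the prompt is arbitrary.
```python
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MPT-7B-Storywriter-GGML",
    model_file="mpt-7b-storywriter.ggmlv3.q4_0.bin",
    model_type="mpt",
)
print(llm("Once upon a time, in a lighthouse at the edge of the world,", max_new_tokens=64))
```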
## Tutorial for using GPT4All-UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# MPT-7B-StoryWriter-65k+
MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories with super long context lengths.
It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-storywriter)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
Apache 2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048 and finetuned with a sequence length of 65536,
ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-storywriter',
trust_remote_code=True
)
config.update({"max_seq_len": 83968})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-storywriter',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
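Putting the pieces above together, a minimal generation sketch might look like the following (a sketch only: it assumes `model` and `tokenizer` were created as in the snippets above; the prompt and sampling settings are arbitrary examples):

```python
import torch

# Assumes `model` and `tokenizer` were created as shown above.
prompt = "Once upon a time, in a kingdom by the sea,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,   # raise this (together with max_seq_len) for long-form stories
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```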
## Example Epilogue
The full text of _The Great Gatsby_ (67873 tokens) was fed to the model, followed by the text "EPILOGUE".
**TheRealFitzgerald233**:
> ...
>
> So we beat on, boats against the current, borne back ceaselessly into the past
>
> EPILOGUE
**MPT-7B-StoryWriter**:
>I have lived so long in the shadow of this dream that I may never see the bright sunlight of reality. I may look up some morning and find that I am six feet under, and that Gatsby, who gave me the dream and everything else, will have vanished behind me.
>
>And so it was. I had hardly crossed the threshold of the old world when my eye caught the headlines of a newspaper, and I knew at once that my dream was over. I knew that Gatsby was dead.
>
>And I knew that day that I didn't love her.
>
>That night I took the last train for West Egg, and when I reached home I went directly to my attic and brought down all Gatsby's letters.
>
>I have sorted them by date and placed them in the order in which they were written. They are very few in number.
>
>They are dated between April of nineteen eighteen and January of nineteen nineteen, and the last one I have here is the one that told me he was dead.
>
>There is something in that last letter that frightens me. It is the sense of a finality, of a completeness. I never heard of him after that, and I was glad.
>
>But a few months ago I read somewhere that he had been seen in New York. He had become a great man.
>
>And I knew that he had not changed at all.
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | **65536** |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 8 A100-80GBs for about 2 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-StoryWriter can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-StoryWriter was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Alex Trott and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
TheBloke/MPT-7B-Instruct-GGML
|
TheBloke
| 2023-06-08T08:59:52Z | 53 | 29 |
transformers
|
[
"transformers",
"mpt",
"Composer",
"MosaicML",
"llm-foundry",
"dataset:mosaicml/dolly_hhrlhf",
"arxiv:2205.14135",
"arxiv:2108.12409",
"arxiv:2010.04245",
"license:cc-by-sa-3.0",
"region:us"
] | null | 2023-05-18T16:41:36Z |
---
license: cc-by-sa-3.0
datasets:
- mosaicml/dolly_hhrlhf
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# MPT-7B-Instruct GGML
This repo contains quantised 4-bit, 5-bit and 8-bit GGML format models of [MosaicML's MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct).
It is the result of converting the original model to GGML and quantising it.
Please note that these MPT GGMLs are **not compatible with llama.cpp**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [MPT-7B: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-GGML).
* [MPT-7B-Instruct: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Instruct-GGML).
* [MPT-7B-Storywriter: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML).
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
`mpt7b-instruct.ggmlv3.q4_0.bin` | q4_0 | 4bit | 4.16GB | 6.2GB | 4-bit. |
`mpt7b-instruct.ggmlv3.q4_1.bin` | q4_1 | 4bit | 4.99GB | 7.2GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
`mpt7b-instruct.ggmlv3.q5_0.bin` | q5_0 | 5bit | 4.57GB | 6.8GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
`mpt7b-instruct.ggmlv3.q5_1.bin` | q5_1 | 5bit | 4.99GB | 7.2GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
`mpt7b-instruct.ggmlv3.q8_0.bin` | q8_0 | 8bit | 7.48GB | 9.7GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
`mpt7b-instruct.ggmlv3.fp16.bin` | fp16 | 16bit | 13.30GB | 16GB | Full 16-bit. |
## Compatibility
These files are **not** compatible with llama.cpp.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The GPT4All-UI which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
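For example, a minimal sketch of loading one of these files with the ctransformers library (`pip install ctransformers`) might look like this — the quantisation file chosen below is just an example, and the keyword arguments assume the ctransformers API at the time of writing:

```python
from ctransformers import AutoModelForCausalLM

# Pick whichever quantisation from the table above suits your RAM budget.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/MPT-7B-Instruct-GGML",
    model_file="mpt7b-instruct.ggmlv3.q5_0.bin",
    model_type="mpt",
)

print(llm("Write a short poem about the sea.", max_new_tokens=128))
```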
## Tutorial for using GPT4All-UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# MPT-7B-Instruct
MPT-7B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-7B](https://huggingface.co/spaces/mosaicml/mpt-7b) on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
May 5, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
### Example Question/Instruction
**Longboi24**:
> What is a quoll?
**MPT-7B-Instruct**:
>A Quoll (pronounced “cool”) is one of Australia’s native carnivorous marsupial mammals, which are also known as macropods or wallabies in other parts around Asia and South America
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b-instruct',
trust_remote_code=True
)
config.update({"max_seq_len": 4096})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b-instruct',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
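As a minimal end-to-end sketch (assuming `model` and `tokenizer` were created as shown above; the prompt below is a plain question rather than any particular instruction template, and `device=0` assumes a single GPU):

```python
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=0,  # set to -1 to run on CPU
)

result = generate(
    "Explain the difference between a list and a tuple in Python.",
    max_new_tokens=128,
    do_sample=True,
    top_p=0.92,
)
print(result[0]["generated_text"])
```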
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## PreTraining Data
For more details on the pretraining process, see [MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens and the MosaicML NLP team
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
TheBloke/MPT-7B-GGML
|
TheBloke
| 2023-06-08T08:59:36Z | 8 | 21 |
transformers
|
[
"transformers",
"mpt",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"region:us"
] | null | 2023-05-18T15:18:36Z |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# MPT-7B GGML
This repo contains quantised 4-bit, 5-bit and 8-bit GGML format models of [MosaicML's MPT-7B](https://huggingface.co/mosaicml/mpt-7b).
It is the result of converting the original model to GGML and quantising it.
Please note that these MPT GGMLs are **not compatible with llama.cpp**. Please see below for a list of tools known to work with these model files.
## Repositories available
* [MPT-7B: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-GGML).
* [MPT-7B-Instruct: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Instruct-GGML).
* [MPT-7B-Storywriter: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML).
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
`mpt-7b.ggmlv3.q4_0.bin` | q4_0 | 4bit | 4.16GB | 6.2GB | 4-bit. |
`mpt-7b.ggmlv3.q4_1.bin` | q4_1 | 4bit | 4.99GB | 7.2GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
`mpt-7b.ggmlv3.q5_0.bin` | q5_0 | 5bit | 4.57GB | 6.8GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
`mpt-7b.ggmlv3.q5_1.bin` | q5_1 | 5bit | 4.99GB | 7.2GB | 5-bit. Even higher accuracy, and higher resource usage and slower inference. |
`mpt-7b.ggmlv3.q8_0.bin` | q8_0 | 8bit | 7.48GB | 9.6GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
`mpt-7b.ggmlv3.fp16.bin` | fp16 | 16bit | 13.3GB | 15.5GB | Full 16-bit. |
## Compatibility
These files are **not** compatible with llama.cpp.
Currently they can be used with:
* KoboldCpp, a powerful inference engine based on llama.cpp, with good UI: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers)
* The GPT4All-UI which uses ctransformers: [GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [rustformers' llm](https://github.com/rustformers/llm)
* The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml)
As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
## Tutorial for using GPT4All-UI
* [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
* [Video tutorial, by GPT4All-UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: MPT-7B
MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B is
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B:
The following models are finetuned on MPT-7B:
* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
## Model Date
May 5, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-1btms90mc-GipE2ufuPkKY0QBrmF3LSA)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
```python
import torch

config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
config.attn_config['attn_impl'] = 'triton'
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
config=config,
torch_dtype=torch.bfloat16,
trust_remote_code=True
)
model.to(device='cuda:0')
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
config = transformers.AutoConfig.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
config.update({"max_seq_len": 4096})
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
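Because ALiBi lets you raise `max_seq_len` as shown above, it can be useful to check how many tokens a prompt occupies before running generation; a small sketch (the input file name is a placeholder):

```python
# "my_long_prompt.txt" is a hypothetical local file; substitute your own text.
text = open("my_long_prompt.txt", encoding="utf-8").read()

n_tokens = len(tokenizer(text)["input_ids"])
print(f"{n_tokens} tokens (configured max_seq_len: {model.config.max_seq_len})")
```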
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 6.7B |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), which increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
    title   = {Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-03-28}, % change this date
urldate = {2023-03-28} % change this date
}
```
|
warpmax/ppo-LunarLander-v2
|
warpmax
| 2023-06-08T08:52:31Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T08:52:07Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.11 +/- 21.61
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
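Until the author adds their own snippet, a minimal sketch of loading and evaluating the checkpoint might look like this (the `.zip` filename is an assumption — check the repo's file list for the actual archive name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename; check the "Files" tab of the repo for the real one.
checkpoint = load_from_hub(repo_id="warpmax/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```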
|
zeyneppktemm/flan-t5-base-imdb-text-classification
|
zeyneppktemm
| 2023-06-08T08:50:56Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-07T16:03:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: flan-t5-base-imdb-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-imdb-text-classification
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0767
- F1: 95.084
- Gen Len: 2.4976
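The card does not include a usage snippet; a minimal inference sketch with the `text2text-generation` pipeline might look like this (whether the model expects a task prefix in front of the review is an assumption — adjust if needed):

```python
from transformers import pipeline

classifier = pipeline(
    "text2text-generation",
    model="zeyneppktemm/flan-t5-base-imdb-text-classification",
)

review = "The plot was thin, but the performances kept me hooked until the very end."
# The model generates the label as text (e.g. "positive" or "negative").
print(classifier(review, max_new_tokens=5)[0]["generated_text"])
```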
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zhangxuri/ewr12412
|
zhangxuri
| 2023-06-08T08:31:51Z | 0 | 0 |
asteroid
|
[
"asteroid",
"legal",
"token-classification",
"ae",
"dataset:tiiuae/falcon-refinedweb",
"dataset:asdfasdfasfasdfasdfasdf",
"license:creativeml-openrail-m",
"region:us"
] |
token-classification
| 2023-06-06T09:18:15Z |
---
license: creativeml-openrail-m
datasets:
- tiiuae/falcon-refinedweb
- asdfasdfasfasdfasdfasdf
language:
- ae
metrics:
- bertscore
library_name: asteroid
pipeline_tag: token-classification
tags:
- legal
libraries:
- pytorch
---
|
diallomama/wav2vec2_xlsr
|
diallomama
| 2023-06-08T08:04:41Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-05T23:38:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_xlsr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_xlsr
This model is a fine-tuned version of [diallomama/wav2vec2_xlsr](https://huggingface.co/diallomama/wav2vec2_xlsr) on the common_voice dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2204
- eval_wer: 0.9719
- eval_runtime: 923.0808
- eval_samples_per_second: 16.346
- eval_steps_per_second: 2.044
- epoch: 1.66
- step: 8400
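No inference example is provided; a minimal sketch with the `automatic-speech-recognition` pipeline might look like this (assuming the repo ships the processor files; `sample.wav` is a placeholder path):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="diallomama/wav2vec2_xlsr")

# "sample.wav" is a placeholder for a local audio file.
print(asr("sample.wav")["text"])
```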
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
Dylancruth/ppo-LunarLander-v2
|
Dylancruth
| 2023-06-08T07:59:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T07:59:14Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 173.89 +/- 47.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Uxinnn/ppo-LunarLander-v5
|
Uxinnn
| 2023-06-08T07:43:19Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T07:39:49Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -193.12 +/- 122.75
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00015
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Uxinnn/ppo-LunarLander-v5'
'batch_size': 512
'minibatch_size': 128}
```
|
sunil18p31a0101/Taxi-v3
|
sunil18p31a0101
| 2023-06-08T07:26:01Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T07:25:59Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the small helper defined in the Deep RL Course notebook;
# it downloads the pickled file from the Hub and loads it.
model = load_from_hub(repo_id="sunil18p31a0101/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
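Building on the snippet above, a greedy rollout with the downloaded Q-table might look like this (a sketch only: it assumes the pickled dictionary stores the table under `"qtable"`, as in the course notebook, and that a Gymnasium-style step API is in use):

```python
import numpy as np

# Uses `model` and `env` from the snippet above.
state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```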
|
casque/hipoly_3dcg_v7-epoch-000012
|
casque
| 2023-06-08T07:21:29Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T07:20:39Z |
---
license: creativeml-openrail-m
---
|
steven-qi-zhao/bert-finetuned-ner
|
steven-qi-zhao
| 2023-06-08T07:10:23Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-08T06:58:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9340495867768595
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9424616410940627
- name: Accuracy
type: accuracy
value: 0.9866809913463237
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9340
- Recall: 0.9510
- F1: 0.9425
- Accuracy: 0.9867
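For inference, a minimal sketch with the token-classification pipeline, grouping subword predictions into entity spans:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="steven-qi-zhao/bert-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is based in New York City."))
# e.g. [{'entity_group': 'ORG', ...}, {'entity_group': 'LOC', ...}]
```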
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0877 | 1.0 | 1756 | 0.0640 | 0.9132 | 0.9325 | 0.9227 | 0.9827 |
| 0.0336 | 2.0 | 3512 | 0.0615 | 0.9275 | 0.9480 | 0.9377 | 0.9861 |
| 0.0174 | 3.0 | 5268 | 0.0606 | 0.9340 | 0.9510 | 0.9425 | 0.9867 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Jinouga/harunosakurav3
|
Jinouga
| 2023-06-08T06:59:15Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-08T06:55:54Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### HarunoSakuraV3 Dreambooth model trained by Jinouga with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Jagannath/phishNet
|
Jagannath
| 2023-06-08T06:58:06Z | 67 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T06:50:18Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: phishNet
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# phishNet
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Jagannath/my_model
|
Jagannath
| 2023-06-08T06:54:19Z | 65 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T06:54:01Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: my_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my_model
This model is a fine-tuned version of a local `./my_model` checkpoint on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
Falah/disney4real
|
Falah
| 2023-06-08T06:45:39Z | 29 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-08T06:33:25Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### disney4real Dreambooth model trained by Falah with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
soBeauty/xlm-roberta-base-Confusion-mlm-20230607
|
soBeauty
| 2023-06-08T06:32:12Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-07T14:28:44Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-Confusion-mlm-20230607
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-Confusion-mlm-20230607
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.8736
- Loss: 0.5270
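A minimal fill-mask sketch (XLM-RoBERTa models use `<mask>` as the mask token):

```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="soBeauty/xlm-roberta-base-Confusion-mlm-20230607")

for prediction in unmasker("The capital of France is <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```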
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| No log | 1.0 | 130 | 0.8677 | 0.6115 |
| No log | 2.0 | 260 | 0.9247 | 0.2752 |
| No log | 3.0 | 390 | 0.8571 | 0.6575 |
| 0.8615 | 4.0 | 520 | 0.8643 | 0.5735 |
| 0.8615 | 5.0 | 650 | 0.8911 | 0.3851 |
| 0.8615 | 6.0 | 780 | 0.8134 | 0.7165 |
| 0.8615 | 7.0 | 910 | 0.8413 | 0.6240 |
| 0.8129 | 8.0 | 1040 | 0.8861 | 0.4053 |
| 0.8129 | 9.0 | 1170 | 0.8606 | 0.5256 |
| 0.8129 | 10.0 | 1300 | 0.8776 | 0.5630 |
| 0.8129 | 11.0 | 1430 | 0.8784 | 0.5410 |
| 0.7179 | 12.0 | 1560 | 0.8807 | 0.5745 |
| 0.7179 | 13.0 | 1690 | 0.8889 | 0.4201 |
| 0.7179 | 14.0 | 1820 | 0.8785 | 0.4649 |
| 0.7179 | 15.0 | 1950 | 0.8859 | 0.4714 |
| 0.6857 | 16.0 | 2080 | 0.8453 | 0.5769 |
| 0.6857 | 17.0 | 2210 | 0.8407 | 0.5363 |
| 0.6857 | 18.0 | 2340 | 0.8724 | 0.5814 |
| 0.6857 | 19.0 | 2470 | 0.9098 | 0.3953 |
| 0.6107 | 20.0 | 2600 | 0.8736 | 0.5270 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
eunyounglee/pegasus-samsum
|
eunyounglee
| 2023-06-08T06:31:03Z | 95 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-08T05:36:46Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4848
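A minimal dialogue-summarisation sketch with the summarization pipeline:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="eunyounglee/pegasus-samsum")

dialogue = """Anna: Are we still on for dinner tonight?
Ben: Yes! 7 pm at the usual place?
Anna: Perfect, see you there."""

print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```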
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6909 | 0.54 | 500 | 1.4848 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
junwai7159/ppo-LunarLander-v2
|
junwai7159
| 2023-06-08T06:26:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T06:26:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 229.94 +/- 35.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
YzZ-George/DeepSpeed-Chat-OPT-1.3B-3-3-3datasets
|
YzZ-George
| 2023-06-08T06:25:27Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2023-06-08T03:31:10Z |
---
license: apache-2.0
---
We train OPT-1.3B using three datasets: Dahoas/rm-static, Dahoas/full-hh-rlhf, and yitingxie/rlhf-reward-datasets.
Dahoas/synthetic-instruct-gptj-pairwise is not used because it lacks a test split.
|
ziq/ingbetic
|
ziq
| 2023-06-08T06:24:25Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"onnx",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-07T15:55:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: ingbetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ingbetic
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6106
- eval_runtime: 23.2732
- eval_samples_per_second: 84.432
- eval_steps_per_second: 10.57
- epoch: 11.35
- step: 2000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.12.1
|
Pstman/my_music_gen-model
|
Pstman
| 2023-06-08T06:13:44Z | 177 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-07T06:10:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_music_gen-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_music_gen-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 101 | 3.3829 |
| No log | 2.0 | 202 | 3.3278 |
| No log | 3.0 | 303 | 3.3157 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Hokkaiswimming/autotrain-k3-65025136019
|
Hokkaiswimming
| 2023-06-08T06:11:30Z | 185 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:Hokkaiswimming/autotrain-data-k3",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-08T06:10:35Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- Hokkaiswimming/autotrain-data-k3
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.10121731414520015
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 65025136019
- CO2 Emissions (in grams): 0.1012
## Validation Metrics
- Loss: 0.202
- Accuracy: 0.895
- Precision: 0.857
- Recall: 1.000
- AUC: 1.000
- F1: 0.923
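A minimal inference sketch with the image-classification pipeline (the example image URL mirrors the widget samples above):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Hokkaiswimming/autotrain-k3-65025136019")

url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
print(classifier(url))  # e.g. [{'label': '...', 'score': 0.97}, ...]
```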
|
njuju/22
|
njuju
| 2023-06-08T06:05:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T06:03:47Z |
---
license: creativeml-openrail-m
---
|
saikatkumardey/LaMini-Flan-T5-77M-jerry_seinfeld_dialogues
|
saikatkumardey
| 2023-06-08T05:39:26Z | 165 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-01T16:40:00Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: LaMini-Flan-T5-77M-jerry_seinfeld_dialogues
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
--- WORK IN PROGRESS ---
# LaMini-Flan-T5-77M-jerry_seinfeld_dialogues
This model is a fine-tuned version of [MBZUAI/LaMini-Flan-T5-77M](https://huggingface.co/MBZUAI/LaMini-Flan-T5-77M) on an unknown dataset.
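Since the base model is a Flan-T5 variant, the checkpoint can presumably be queried through the text2text-generation pipeline. This is an unverified sketch and the prompt is a made-up example.
```python
from transformers import pipeline

# T5-style checkpoints are served through the text2text-generation pipeline.
generator = pipeline("text2text-generation", model="saikatkumardey/LaMini-Flan-T5-77M-jerry_seinfeld_dialogues")

# Placeholder prompt; adjust to the dialogue format used during fine-tuning.
print(generator("What's the deal with airline food?", max_new_tokens=64)[0]["generated_text"])
```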
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.5
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ku-nlp/gpt2-medium-japanese-char
|
ku-nlp
| 2023-06-08T05:34:26Z | 285 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-18T06:29:28Z |
---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- gpt2
datasets:
- wikipedia
- cc100
- oscar
widget:
- text: "<s>昨日私は京都で"
---
# Model Card for Japanese character-level GPT-2 Medium
## Model description
This is a Japanese character-level GPT-2 Medium (310M parameters) language model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
## How to use
You can use this model directly with a pipeline for text generation.
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='ku-nlp/gpt2-medium-japanese-char')
>>> set_seed(5)
>>> generator("<s>昨日私は京都で", max_length=30, do_sample=True, num_return_sequences=5)
[{'generated_text': '<s>昨日私は京都で仕事だったのです。そのときに訪れた京都の街の'},
{'generated_text': '<s>昨日私は京都で開かれた、「みんなで絵本の読み聞かせ会」に参'},
{'generated_text': '<s>昨日私は京都で行われましたコンペティションに参加してきまし'},
{'generated_text': '<s>昨日私は京都では雪が解けるの日経平均株価が下がるのみで今は'},
{'generated_text': '<s>昨日私は京都でこみっくトレジャー2を開催して見ましたが、そ'}]
```
You can also use this model to get the features of a given text.
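For example, a minimal sketch of extracting hidden states with `AutoTokenizer` and `AutoModel`:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ku-nlp/gpt2-medium-japanese-char")
model = AutoModel.from_pretrained("ku-nlp/gpt2-medium-japanese-char")

# The model expects "<s>" at the start of the text, as in the generation example above.
inputs = tokenizer("<s>昨日私は京都で", return_tensors="pt")
outputs = model(**inputs)
features = outputs.last_hidden_state  # shape: (batch, sequence_length, hidden_size)
```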
## Vocabulary
A character-level vocabulary of size 6K is used. To be precise, rare characters may be split into bytes because byte-level byte-pair encoding (BPE) is used. The BPE tokenizer was trained on a small subset of the training data. Since the data were converted into a one-character-per-line format, merge operations never go beyond character boundaries.
Note that the tokenizer maps U+0020 to `[UNK]` because preprocessing eliminated whitespace characters (U+0020) from training data. Use U+3000 (Ideographic Space) instead.
## Training data
We used the following corpora for pre-training:
- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)
Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
## Training procedure
The training took about 3 months (with two interruptions) with a single NVIDIA A100 80GB GPU.
The following hyperparameters were used during pre-training:
- learning_rate: 2e-4
- per_device_train_batch_size: 14
- gradient_accumulation_steps: 42
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-06
- weight_decay: 0.01
- lr_scheduler_type: linear
- max_grad_norm: 1.0
- max_steps: 500,000 (but terminated at 186,000 steps ~= 2.0 epochs)
- warmup_steps: 10,000
The eval loss was 1.411 while the eval accuracy was 0.6697. The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
|
SHENMU007/neunit_BASE_V7.6
|
SHENMU007
| 2023-06-08T05:30:33Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-06-08T02:06:24Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
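As a fine-tuned SpeechT5 TTS checkpoint, it should follow the same inference recipe as the base `microsoft/speecht5_tts` model: processor, model, HiFi-GAN vocoder, and a speaker embedding. The sketch below is unverified; the CMU ARCTIC x-vector and the Chinese sample sentence are placeholders only.
```python
import torch
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V7.6")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V7.6")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Placeholder speaker embedding taken from the CMU ARCTIC x-vector dataset.
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="你好,欢迎使用这个语音合成模型。", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 16 kHz waveform tensor; write it out with e.g. soundfile.
```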
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GautamR/detect_agri
|
GautamR
| 2023-06-08T05:24:51Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"mobilebert",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-31T05:30:37Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
library_name: transformers
---
|
Tsuroko/Agustinaa
|
Tsuroko
| 2023-06-08T05:14:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T05:14:37Z |
---
license: creativeml-openrail-m
---
|
97jmlr/sd-class-butterflies-32
|
97jmlr
| 2023-06-08T05:14:32Z | 30 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2023-06-08T05:14:21Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('97jmlr/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
hfl/chinese-llama-lora-33b
|
hfl
| 2023-06-08T05:13:56Z | 0 | 8 | null |
[
"zh",
"license:apache-2.0",
"region:us"
] | null | 2023-06-07T09:16:09Z |
---
license: apache-2.0
language:
- zh
---
# Chinese-LLaMA-LoRA-33B
This repo contains the tokenizer, Chinese-LLaMA LoRA weights and configs for [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
Instructions for using the weights can be found at https://github.com/ymcui/Chinese-LLaMA-Alpaca.
|
Tsuroko/Agustina
|
Tsuroko
| 2023-06-08T05:13:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T04:59:07Z |
---
license: creativeml-openrail-m
---
|
CS2024/1111
|
CS2024
| 2023-06-08T05:10:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-08T05:08:33Z |
Russia ends the war in Ukraine, Putin sounds the retreat for his troops, blood sprays from his horn, Putin has puffed-up cheeks
|
dennischui/taxi_v3
|
dennischui
| 2023-06-08T04:49:07Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T04:33:41Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.65
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # or gymnasium, depending on your local setup

# `load_from_hub` is the helper defined in Unit 2 of the Deep RL course notebook.
model = load_from_hub(repo_id="dennischui/taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
abbymark/Reinforce-Pixelcopter-PLE-v0
|
abbymark
| 2023-06-08T04:30:40Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T01:04:14Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 31.40 +/- 25.99
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
openaccess-ai-collective/wizard-mega-13b
|
openaccess-ai-collective
| 2023-06-08T04:20:46Z | 2,680 | 106 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-14T21:48:43Z |
---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Wizard Mega 13B has been updated and is now Manticore 13B
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
Manticore is available at https://huggingface.co/openaccess-ai-collective/manticore-13b and fixes many issues with Wizard Mega and adds new datasets to the training.
Wizard Mega is a Llama 13B model fine-tuned on the [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered), and [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered) datasets. These particular datasets have all been filtered to remove responses where the model responds with "As an AI language model...", etc or when the model refuses to respond.
# Demo
Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.
- https://huggingface.co/spaces/openaccess-ai-collective/wizard-mega-ggml
## Release (Epoch Two)
The Wizard Mega 13B SFT model is being released after two epochs as the eval loss increased during the 3rd (final planned epoch). Because of this, we have preliminarily decided to use the epoch 2 checkpoint as the final release candidate. https://wandb.ai/wing-lian/vicuna-13b/runs/5uebgm49
## Build
Wizard Mega was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB for 15 hours. The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/wizard-mega-13b/tree/main/configs).
## Bias, Risks, and Limitations
Wizard Mega has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Wizard Mega was fine-tuned from the base model LlaMa 13B, please refer to its model card's Limitations Section for relevant information.
## Examples
````
### Instruction: write Python code that returns the first n numbers of the Fibonacci sequence using memoization.
### Assistant:
```python
def fib(n):
if n < 2:
return n
elif n in cache:
return cache[n]
else:
a, b = 0, 1
for i in range(2, n+1):
cache[i] = a
a, b = b, a + b
return cache[n]
def first_n(n):
fib_list = [fib(i) for i in range(n+1)]
return fib_list[:n]
```
This function uses memoization to store the values of previous Fibonacci numbers in a cache. This way, if the same number is requested again, it can be returned immediately without recalculating it.
The `first_n` function takes an integer `n` as input, and calculates the first n numbers of the Fibonacci sequence using memoization. It returns a list of those numbers.
````
```
### Instruction: Finish the joke, a mechanic and a car salesman walk into a bar...
### Assistant: ...and the bartender says "Hey guys, what can I get for you?" The mechanic replies, "I'll have a beer, but make it a quick one. I have to fix this guy's car before he finds out I
fiddled with his brakes." The salesman quips, "And I'll have a martini, shaken not stirred. After all, I have to sell this guy a car that doesn't break down on him within the first year of ownership."
```
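For reference, a minimal (unverified) sketch of querying the full-precision weights with `transformers`, reusing the `### Instruction:` / `### Assistant:` prompt format shown in the examples above; the generation settings are arbitrary.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/wizard-mega-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "### Instruction: Explain memoization in one paragraph.\n\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(tokens[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```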
|
Vieraaa/calya
|
Vieraaa
| 2023-06-08T04:18:18Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-07T17:23:18Z |
---
license: creativeml-openrail-m
---
|
RadwaH/CustomDiffusionAgnes2
|
RadwaH
| 2023-06-08T04:06:02Z | 6 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-08T00:09:53Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
instance_prompt: photo of a <new1> girl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - RadwaH/CustomDiffusionAgnes2
These are Custom Diffusion adaption weights for stabilityai/stable-diffusion-2. The weights were trained on photo of a <new1> girl using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.


For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
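A minimal sketch of loading these adaption weights with 🧨 Diffusers, following the generic Custom Diffusion loading pattern; the weight file names below are the library defaults and may differ for this repository.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# Load the Custom Diffusion attention weights and the <new1> token embedding.
pipe.unet.load_attn_procs(
    "RadwaH/CustomDiffusionAgnes2", weight_name="pytorch_custom_diffusion_weights.bin"
)
pipe.load_textual_inversion("RadwaH/CustomDiffusionAgnes2", weight_name="<new1>.bin")

image = pipe("photo of a <new1> girl", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("agnes.png")
```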
|
wjungvm/distilbert-base-uncased-finetuned-emotion
|
wjungvm
| 2023-06-08T04:03:46Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T03:55:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9245837586314949
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.9245
- F1: 0.9246
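A minimal usage sketch with the text-classification pipeline; the example sentence is only illustrative.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="wjungvm/distilbert-base-uncased-finetuned-emotion")

# Returns the top predicted label and its score for the input text.
print(classifier("I can't believe how wonderful today turned out!"))
```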
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8124 | 1.0 | 250 | 0.3055 | 0.91 | 0.9079 |
| 0.2446 | 2.0 | 500 | 0.2161 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
wiorz/legal_bert_sm_cv_defined_summarized_4
|
wiorz
| 2023-06-08T03:52:24Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T03:49:38Z |
---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_sm_cv_defined_summarized_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_bert_sm_cv_defined_summarized_4
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7595
- Accuracy: 0.811
- Precision: 0.5385
- Recall: 0.2154
- F1: 0.3077
- D-index: 1.5216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 250 | 0.4882 | 0.805 | 0.0 | 0.0 | 0.0 | 1.4370 |
| 0.5662 | 2.0 | 500 | 0.4681 | 0.805 | 0.0 | 0.0 | 0.0 | 1.4370 |
| 0.5662 | 3.0 | 750 | 0.4649 | 0.807 | 0.625 | 0.0256 | 0.0493 | 1.4491 |
| 0.4397 | 4.0 | 1000 | 0.4675 | 0.819 | 0.7692 | 0.1026 | 0.1810 | 1.4931 |
| 0.4397 | 5.0 | 1250 | 0.5234 | 0.816 | 0.7391 | 0.0872 | 0.1560 | 1.4836 |
| 0.3492 | 6.0 | 1500 | 0.5137 | 0.825 | 0.6562 | 0.2154 | 0.3243 | 1.5406 |
| 0.3492 | 7.0 | 1750 | 0.5490 | 0.81 | 0.5490 | 0.1436 | 0.2276 | 1.4952 |
| 0.2409 | 8.0 | 2000 | 0.6896 | 0.82 | 0.5882 | 0.2564 | 0.3571 | 1.5478 |
| 0.2409 | 9.0 | 2250 | 0.7600 | 0.808 | 0.5155 | 0.2564 | 0.3425 | 1.5316 |
| 0.1506 | 10.0 | 2500 | 1.0232 | 0.813 | 0.5714 | 0.1641 | 0.2550 | 1.5065 |
| 0.1506 | 11.0 | 2750 | 1.0855 | 0.823 | 0.6731 | 0.1795 | 0.2834 | 1.5255 |
| 0.0851 | 12.0 | 3000 | 1.1956 | 0.797 | 0.4655 | 0.2769 | 0.3473 | 1.5236 |
| 0.0851 | 13.0 | 3250 | 1.2379 | 0.808 | 0.5190 | 0.2103 | 0.2993 | 1.5157 |
| 0.0538 | 14.0 | 3500 | 1.4613 | 0.807 | 0.5143 | 0.1846 | 0.2717 | 1.5055 |
| 0.0538 | 15.0 | 3750 | 1.4960 | 0.815 | 0.5658 | 0.2205 | 0.3173 | 1.5288 |
| 0.0334 | 16.0 | 4000 | 1.6423 | 0.806 | 0.5067 | 0.1949 | 0.2815 | 1.5076 |
| 0.0334 | 17.0 | 4250 | 1.6386 | 0.804 | 0.4958 | 0.3026 | 0.3758 | 1.5419 |
| 0.0364 | 18.0 | 4500 | 1.6520 | 0.797 | 0.45 | 0.1846 | 0.2618 | 1.4917 |
| 0.0364 | 19.0 | 4750 | 1.6842 | 0.804 | 0.4953 | 0.2718 | 0.3510 | 1.5314 |
| 0.0167 | 20.0 | 5000 | 1.7595 | 0.811 | 0.5385 | 0.2154 | 0.3077 | 1.5216 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
SEVUNX/JOY_DIFFUSION
|
SEVUNX
| 2023-06-08T03:46:53Z | 0 | 0 | null |
[
"art",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-02-28T13:37:36Z |
---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- art
- stable-diffusion
---
<center>
<b><i><font size="6"><p style="color:red">JOY DIFFUSION CHECKPOINT MERGE</p></font></i></b>
<img src="https://64.media.tumblr.com/3c2c6f40b41877ef923150a52705a14a/tumblr_mlnzf9BvWN1qg6rkio1_500.gifv" alt="">
</center>
|
Yaxin1992/llama-33b-qlora-4000
|
Yaxin1992
| 2023-06-08T03:38:09Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:other",
"region:us"
] | null | 2023-06-07T22:08:03Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: llama-33b-qlora-4000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-33b-qlora-4000
This model is a fine-tuned version of [decapoda-research/llama-30b-hf](https://huggingface.co/decapoda-research/llama-30b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
DoesNoPro/DialoGPT-small-RaidenG2
|
DoesNoPro
| 2023-06-08T03:30:47Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T03:23:18Z |
---
tags:
- conversational
---
|
nickmuchi/setfit-model-mpnet-financial-classification
|
nickmuchi
| 2023-06-08T03:21:21Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-08T03:21:08Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nickmuchi/setfit-model-mpnet-financial-classification
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nickmuchi/setfit-model-mpnet-financial-classification")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
jujbob/bert-finetuned-ner-ime
|
jujbob
| 2023-06-08T03:10:48Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-07T02:27:01Z |
---
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-ime
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.998195331607817
- name: Recall
type: recall
value: 0.9982190349544073
- name: F1
type: f1
value: 0.9982071831403979
- name: Accuracy
type: accuracy
value: 0.9979751132733664
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-ime
This model is a fine-tuned version of [snunlp/KR-BERT-char16424](https://huggingface.co/snunlp/KR-BERT-char16424) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0076
- Precision: 0.9982
- Recall: 0.9982
- F1: 0.9982
- Accuracy: 0.9980
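A minimal (unverified) inference sketch with the token-classification pipeline; the example sentence is a placeholder and should be replaced with text in the language and domain the model was trained on.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jujbob/bert-finetuned-ner-ime",
    aggregation_strategy="simple",  # group sub-word tokens into full entity spans
)

print(ner("My name is Wolfgang and I live in Berlin."))  # placeholder sentence
```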
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0378 | 1.0 | 1756 | 0.0290 | 0.9934 | 0.9939 | 0.9936 | 0.9920 |
| 0.0214 | 2.0 | 3512 | 0.0138 | 0.9969 | 0.9970 | 0.9970 | 0.9965 |
| 0.0151 | 3.0 | 5268 | 0.0076 | 0.9982 | 0.9982 | 0.9982 | 0.9980 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
nickmuchi/setfit-model-financial-classification
|
nickmuchi
| 2023-06-08T03:06:57Z | 4 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-08T03:06:45Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nickmuchi/setfit-model-financial-classification
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nickmuchi/setfit-model-financial-classification")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
chereddy/Taxi-v3-attempt1
|
chereddy
| 2023-06-08T02:52:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T02:52:28Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-attempt1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.77
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # or gymnasium, depending on your local setup

# `load_from_hub` is the helper defined in Unit 2 of the Deep RL course notebook.
model = load_from_hub(repo_id="chereddy/Taxi-v3-attempt1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jangmin/whisper-small-ko-normalized-1273h
|
jangmin
| 2023-06-08T02:46:40Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-01T10:00:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-ko-normalized-1273h
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ko-normalized-1273h
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1426
- Wer: 0.0671
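A minimal (unverified) sketch of transcribing Korean speech with the automatic-speech-recognition pipeline; `audio.wav` is a placeholder path to a 16 kHz recording.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jangmin/whisper-small-ko-normalized-1273h",
    chunk_length_s=30,  # handle recordings longer than Whisper's 30-second window
)

print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder path
```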
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0726 | 1.0 | 6581 | 0.1490 | 0.0721 |
| 0.0368 | 2.0 | 13162 | 0.1405 | 0.0686 |
| 0.0317 | 3.0 | 19743 | 0.1426 | 0.0671 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
## Evaluation Result for the dataset `google/fleurs`
The trained model is evaluated on the `test` split of subset `ko_kr` from the dataset `google/fleurs`.
Please note that the model was not trained on the `train` split from the dataset.
|model|Wer|
|---|---|
|openai/whisper|0.2826|
|this model|0.2679|
|
wikingz/mayuyokotarealis
|
wikingz
| 2023-06-08T01:21:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-08T01:17:37Z |
---
license: creativeml-openrail-m
---
|
luffycodes/tutorbot-spock-bio-llama-diff
|
luffycodes
| 2023-06-08T01:19:14Z | 10 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"biology",
"chatgpt",
"vicuna",
"tutorbot",
"conversation",
"dataset:luffycodes/Tutorbot-Spock-Bio-Dataset",
"arxiv:2305.13272",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-31T17:36:07Z |
---
datasets:
- luffycodes/Tutorbot-Spock-Bio-Dataset
license: apache-2.0
tags:
- biology
- chatgpt
- llama
- vicuna
- tutorbot
- conversation
---
**NOTE: This "diff model" cannot be used directly.**
Users have to apply it on top of the original LLaMA weights to get actual Spock weights.
Please find the instructions here: https://github.com/luffycodes/Tutorbot-Spock-Bio.
<br>
<br>
# Spock Model Card
## Github details
Please checkout the repo: https://github.com/luffycodes/Tutorbot-Spock-Bio.
## Model details
**Model type:**
Spock is an open-source educational tutoring chatbot trained by fine-tuning LLaMA and Vicuna model on synthetic student-tutorbot conversations generated using a specialized prompt.
**Model date:**
Spock was trained between April 2023 and May 2023.
**Organizations developing the model:**
The Spock team with members from Rice University and OpenStax.
## Training dataset
700 conversations generated using a [specialized prompt](https://github.com/luffycodes/Tutorbot-Spock-Bio/blob/main/prompts/conversation_gen/v3.txt) from GPT-4.
Dataset link: https://huggingface.co/datasets/luffycodes/Tutorbot-Spock-Bio-Dataset
**Paper or resources for more information:**
https://arxiv.org/abs/2305.13272
**Code or resources for more information:**
https://github.com/luffycodes/Tutorbot-Spock-Bio
**License:**
Apache License 2.0
**Where to send questions or comments about the model:**
Shashank Sonkar (ss164@rice.edu)
If you use this work, please cite:
CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles
https://arxiv.org/abs/2305.13272
```
@misc{sonkar2023class,
title={CLASS Meet SPOCK: An Education Tutoring Chatbot based on Learning Science Principles},
author={Shashank Sonkar and Lucy Liu and Debshila Basu Mallick and Richard G. Baraniuk},
year={2023},
eprint={2305.13272},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
rkumar38/my_ssl
|
rkumar38
| 2023-06-08T01:18:29Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-08T00:56:17Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: rkumar38/my_ssl
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rkumar38/my_ssl
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8614
- Train Accuracy: 1.0
- Epoch: 9
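Since this is a TensorFlow ViT checkpoint, inference should look roughly like the sketch below; it is unverified and the image path is a placeholder.
```python
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFViTForImageClassification

processor = AutoImageProcessor.from_pretrained("rkumar38/my_ssl")
model = TFViTForImageClassification.from_pretrained("rkumar38/my_ssl")

image = Image.open("example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
predicted = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[predicted])
```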
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Epoch |
|:----------:|:--------------:|:-----:|
| 1.5734 | 1.0 | 0 |
| 1.4597 | 1.0 | 1 |
| 1.3456 | 1.0 | 2 |
| 1.2322 | 1.0 | 3 |
| 1.1458 | 1.0 | 4 |
| 1.0713 | 1.0 | 5 |
| 0.9932 | 1.0 | 6 |
| 0.9456 | 1.0 | 7 |
| 0.9033 | 1.0 | 8 |
| 0.8614 | 1.0 | 9 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jvilaseca/ppo-Huggy
|
jvilaseca
| 2023-06-08T01:17:15Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-08T01:17:08Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: jvilaseca/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
raghvendramall/esm2_t30_150M_UR50D-finetuned-localization
|
raghvendramall
| 2023-06-08T00:55:47Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"esm",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-07T10:18:15Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: esm2_t30_150M_UR50D-finetuned-localization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t30_150M_UR50D-finetuned-localization
This model is a fine-tuned version of [facebook/esm2_t30_150M_UR50D](https://huggingface.co/facebook/esm2_t30_150M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8191
- F1: 0.7240
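A minimal (unverified) sketch of scoring a protein sequence with the text-classification pipeline; the amino-acid sequence below is an arbitrary placeholder.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="raghvendramall/esm2_t30_150M_UR50D-finetuned-localization",
)

# ESM models take raw amino-acid sequences as input; this one is a placeholder.
print(classifier("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"))
```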
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4436 | 1.0 | 7778 | 0.4439 | 0.7285 |
| 0.374 | 2.0 | 15556 | 0.4806 | 0.7428 |
| 0.2786 | 3.0 | 23334 | 0.8067 | 0.7243 |
| 0.1524 | 4.0 | 31112 | 1.3323 | 0.7261 |
| 0.1035 | 5.0 | 38890 | 1.3754 | 0.7227 |
| 0.0532 | 6.0 | 46668 | 1.4962 | 0.7165 |
| 0.0379 | 7.0 | 54446 | 1.5434 | 0.7173 |
| 0.0319 | 8.0 | 62224 | 1.6561 | 0.7201 |
| 0.0181 | 9.0 | 70002 | 1.7344 | 0.7259 |
| 0.0056 | 10.0 | 77780 | 1.8191 | 0.7240 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
wiorz/bert_sm_cv_summarized_4
|
wiorz
| 2023-06-08T00:51:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-08T00:47:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_sm_cv_summarized_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_sm_cv_summarized_4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9996
- Accuracy: 0.802
- Precision: 0.48
- Recall: 0.1846
- F1: 0.2667
- D-index: 1.4986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 250 | 0.4713 | 0.812 | 0.5814 | 0.1282 | 0.2101 | 1.4926 |
| 0.5708 | 2.0 | 500 | 0.4584 | 0.811 | 0.5625 | 0.1385 | 0.2222 | 1.4948 |
| 0.5708 | 3.0 | 750 | 0.4557 | 0.813 | 0.5769 | 0.1538 | 0.2429 | 1.5029 |
| 0.4231 | 4.0 | 1000 | 0.4700 | 0.81 | 0.5316 | 0.2154 | 0.3066 | 1.5202 |
| 0.4231 | 5.0 | 1250 | 0.4979 | 0.812 | 0.5385 | 0.2513 | 0.3427 | 1.5353 |
| 0.3292 | 6.0 | 1500 | 0.5337 | 0.816 | 0.5647 | 0.2462 | 0.3429 | 1.5389 |
| 0.3292 | 7.0 | 1750 | 0.6282 | 0.797 | 0.4615 | 0.2462 | 0.3211 | 1.5131 |
| 0.2218 | 8.0 | 2000 | 0.7182 | 0.805 | 0.5 | 0.2513 | 0.3345 | 1.5257 |
| 0.2218 | 9.0 | 2250 | 0.8488 | 0.809 | 0.5208 | 0.2564 | 0.3436 | 1.5329 |
| 0.1478 | 10.0 | 2500 | 0.9830 | 0.809 | 0.5294 | 0.1846 | 0.2738 | 1.5082 |
| 0.1478 | 11.0 | 2750 | 1.0302 | 0.79 | 0.4419 | 0.2923 | 0.3519 | 1.5193 |
| 0.077 | 12.0 | 3000 | 1.0467 | 0.795 | 0.4658 | 0.3487 | 0.3988 | 1.5452 |
| 0.077 | 13.0 | 3250 | 1.2609 | 0.803 | 0.4931 | 0.3641 | 0.4189 | 1.5612 |
| 0.0328 | 14.0 | 3500 | 1.4127 | 0.806 | 0.5044 | 0.2923 | 0.3701 | 1.5411 |
| 0.0328 | 15.0 | 3750 | 1.6626 | 0.802 | 0.4835 | 0.2256 | 0.3077 | 1.5128 |
| 0.0189 | 16.0 | 4000 | 1.7062 | 0.81 | 0.5362 | 0.1897 | 0.2803 | 1.5113 |
| 0.0189 | 17.0 | 4250 | 1.9225 | 0.809 | 0.54 | 0.1385 | 0.2204 | 1.4921 |
| 0.0214 | 18.0 | 4500 | 1.8228 | 0.81 | 0.5269 | 0.2513 | 0.3403 | 1.5325 |
| 0.0214 | 19.0 | 4750 | 1.9544 | 0.789 | 0.4355 | 0.2769 | 0.3386 | 1.5127 |
| 0.0184 | 20.0 | 5000 | 1.9996 | 0.802 | 0.48 | 0.1846 | 0.2667 | 1.4986 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
yukismd/JapaneseQuizChatbot_v1
|
yukismd
| 2023-06-08T00:48:50Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"ja",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-08T00:25:01Z |
---
language:
- ja
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: >-
https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [cyberagent/open-calm-7b](https://huggingface.co/cyberagent/open-calm-7b)
- Training Data: [AI王 〜クイズAI日本一決定戦〜](https://sites.google.com/view/project-aio/dataset) ([Transformed dataset for training by H2O LLM Studio](https://h2oai-jpn-public.s3.amazonaws.com/sample-data/llm/JapaneseQuiz.csv))
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="yukismd/JapaneseQuizChatbot_v1",
torch_dtype=torch.float16,
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"日本で一番高い山は富士山ですが、二番目に高い山は?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:
```python
print(generate_text.preprocess("日本で一番高い山は富士山ですが、二番目に高い山は?")["prompt_text"])
```
```bash
<|prompt|>日本で一番高い山は富士山ですが、二番目に高い山は?<|endoftext|><|answer|>
```
Alternatively, if you prefer to not use `trust_remote_code=True` you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"yukismd/JapaneseQuizChatbot_v1",
use_fast=True,
padding_side="left"
)
model = AutoModelForCausalLM.from_pretrained(
"yukismd/JapaneseQuizChatbot_v1",
torch_dtype=torch.float16,
device_map={"": "cuda:0"}
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"日本で一番高い山は富士山ですが、二番目に高い山は?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yukismd/JapaneseQuizChatbot_v1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>日本で一番高い山は富士山ですが、二番目に高い山は?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
**inputs,
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=2,
temperature=float(0.3),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(52224, 4096)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=4096, out_features=12288, bias=True)
(dense): Linear(in_features=4096, out_features=4096, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=4096, out_features=16384, bias=True)
(dense_4h_to_h): Linear(in_features=16384, out_features=4096, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=4096, out_features=52224, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Model Validation
Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=yukismd/JapaneseQuizChatbot_v1 --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
Ivydata/whisper-base-japanese
|
Ivydata
| 2023-06-08T00:17:50Z | 207 | 2 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"ja",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-17T04:36:41Z |
---
license: apache-2.0
datasets:
- common_voice
language:
- ja
tags:
- audio
---
# Fine-tuned Japanese Whisper model for speech recognition using whisper-base
Fine-tuned [openai/whisper-base](https://huggingface.co/openai/whisper-base) on Japanese using [Common Voice](https://commonvoice.mozilla.org/ja/datasets), [JVS](https://sites.google.com/site/shinnosuketakamichi/research-topics/jvs_corpus) and [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly as follows.
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from datasets import load_dataset
import librosa
import torch
LANG_ID = "ja"
MODEL_ID = "Ivydata/whisper-base-japanese"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = WhisperProcessor.from_pretrained("openai/whisper-base")
model = WhisperForConditionalGeneration.from_pretrained(MODEL_ID)
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(
language="ja", task="transcribe"
)
model.config.suppress_tokens = []
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
batch["sampling_rate"] = sampling_rate
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
sample = test_dataset[0]
input_features = processor(sample["speech"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
# ['<|startoftranscript|><|ja|><|transcribe|><|notimestamps|>木村さんに電話を貸してもらいました。<|endoftext|>']
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
# ['木村さんに電話を貸してもらいました。']
```
## Test Result
In the table below I report the Character Error Rate (CER) of the model tested on [TEDxJP-10K](https://github.com/laboroai/TEDxJP-10K) dataset.
| Model | CER |
| ------------- | ------------- |
| Ivydata/whisper-small-japanese | **27.25%** |
| Ivydata/wav2vec2-large-xlsr-53-japanese | **27.87%** |
| jonatasgrosman/wav2vec2-large-xlsr-53-japanese | 34.18% |
|
abbymark/Reinforce-CartPole-v1
|
abbymark
| 2023-06-08T00:15:37Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-08T00:15:27Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 480.60 +/- 58.20
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|