| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
| eluzhnica/mpt-30b-instruct-peft-compatible | eluzhnica | 2023-06-27T21:39:19Z | 13 | 2 | transformers | ["transformers", "pytorch", "mpt", "text-generation", "Composer", "MosaicML", "llm-foundry", "custom_code", "arxiv:2205.14135", "arxiv:2108.12409", "license:cc-by-sa-3.0", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-06-27T18:24:56Z |
---
license: cc-by-sa-3.0
datasets:
- competition_math
- conceptofmind/cot_submix_original/cot_gsm8k
- knkarthick/dialogsum
- mosaicml/dolly_hhrlhf
- duorc
- tau/scrolls/qasper
- emozilla/quality
- scrolls/summ_screen_fd
- spider
tags:
- Composer
- MosaicML
- llm-foundry
inference: false
---
# MPT-30B-Instruct
This is MPT-30B-Instruct with added support for finetuning via PEFT (tested with QLoRA). It has not been finetuned further; the weights are identical to the original MPT-30B-Instruct.
I have not traced through the whole Hugging Face stack to verify that everything works correctly, but the model does finetune with QLoRA and the outputs are reasonable.
Inspired by the implementations at https://huggingface.co/cekal/mpt-7b-peft-compatible/commits/main and https://huggingface.co/mosaicml/mpt-7b/discussions/42.
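As a rough sketch, a QLoRA-style setup might look like the following (the LoRA hyperparameters and target modules are illustrative assumptions, not settings verified against this repo):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load in 4-bit (QLoRA-style); requires bitsandbytes and accelerate.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    'eluzhnica/mpt-30b-instruct-peft-compatible',
    quantization_config=bnb_config,
    trust_remote_code=True
)

# Attach LoRA adapters; "Wqkv" (MPT's fused attention projection) is an assumed target.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, target_modules=["Wqkv"]))
model.print_trainable_parameters()
```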
The original description from the MosaicML team follows below:
MPT-30B-Instruct is a model for short-form instruction following.
It is built by finetuning [MPT-30B](https://huggingface.co/mosaicml/mpt-30b) on [Dolly HHRLHF](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets. It is also trained on [Competition Math](https://huggingface.co/datasets/competition_math), [Duorc](https://huggingface.co/datasets/duorc), [CoT GSM8k](https://huggingface.co/datasets/conceptofmind/cot_submix_original), [Qasper](https://huggingface.co/datasets/allenai/qasper), [Quality](https://huggingface.co/datasets/emozilla/quality), [Summ Screen FD](https://huggingface.co/datasets/tau/scrolls) and [Spider](https://huggingface.co/datasets/spider).
* License: _CC-By-SA-3.0_
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
## Model Date
June 22, 2023
## Model License
CC-By-SA-3.0
## Documentation
* [Blog post: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
### Example Question/Instruction
**Bespokenizer46**
> I need to convince my friend, Phyllis, that she should train a custom LLM for her Fortune 500 company using the MosaicML Platform.
> Please write an email that explains why MosaicML's emphasis on cutting edge methodology, data privacy, and efficiency are so important.
> End the email with a friendly inquiry about Phyllis's family.
**MPT-30B-Instruct**:
> Phyllis -
> I know you're always looking for ways to improve efficiency at work and I think I've found a solution that you'll be very interested in.
> LLMs are only as good as the data they're trained on and the training process can be very time consuming and resource intensive. MosaicML uses cutting edge methodology to efficiently train high quality models without compromising data privacy.
> They also provide tools to easily connect to and use the model in your daily workflow.
> I think you'd really enjoy speaking with their founder, we can set up a call if you're interested.
> Also, I know it's been a tough year for your family, how are things?
> Best,
> Your Friend
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method. This is because we use a custom model architecture that is not yet part of the `transformers` package.
It includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b-instruct',
trust_remote_code=True
)
```
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially on a sequence length of 2048. An additional pre-training phase was included for sequence length adaptation to 8192. However, ALiBi further enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b-instruct'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is based on the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer and includes additional padding and eos tokens.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
import torch
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
### Formatting
This model was trained on data formatted as follows:
```python
def format_prompt(instruction):
template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n###Instruction\n{instruction}\n\n### Response\n"
return template.format(instruction=instruction)
example = "Tell me a funny joke.\nDon't make it too funny though."
fmt_ex = format_prompt(instruction=example)
```
In the above example, `fmt_ex` is ready to be tokenized and sent through the model.
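For instance, a minimal generation sketch, reusing the `model` and `tokenizer` loaded above:
```python
import torch

# Tokenize the formatted prompt and generate a response in bfloat16 autocast.
with torch.autocast('cuda', dtype=torch.bfloat16):
    inputs = tokenizer(fmt_ex, return_tensors="pt").to('cuda')
    outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```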
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Data Mix
The model was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion |
|-------------|----------------------------|------------|
| competition_math | 1.6 M | 3.01% |
| cot_gsm8k | 3.36 M | 6.32% |
| dialogsum | 0.1 M | 0.19% |
| dolly_hhrlhf | 5.89 M | 11.07% |
| duorc | 8.2 M | 15.51% |
| qasper | 10.97 M | 20.63% |
| quality | 11.31 M | 21.28% |
| scrolls/summ_screen_fd | 11.56 M | 21.82% |
| spider | 0.089 M | 0.16% |
## PreTraining Data
For more details on the pretraining process, see [MPT-30B](https://huggingface.co/mosaicml/mpt-30b).
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Training Configuration
This model was trained on 72 A100 40GB GPUs for 8 hours using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the AdamW optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B-Instruct was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Acknowledgements
This model was finetuned by Sam Havens, Alex Trott, and the MosaicML NLP team.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
| nathan-cai/ppo-SnowballTarget | nathan-cai | 2023-06-27T21:35:31Z | 0 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2023-06-27T21:35:30Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: nathan-cai/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
| Maxlumaga/TanialeeV2 | Maxlumaga | 2023-06-27T21:20:11Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-06-27T19:14:28Z |
---
license: creativeml-openrail-m
---
|
| TalesLF/ppo-LunarLander-v2 | TalesLF | 2023-06-27T21:18:27Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-06-27T21:18:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 292.11 +/- 12.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the standard SB3 Hub naming convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Assumed default checkpoint name; adjust if the repo uses a different filename.
checkpoint = load_from_hub("TalesLF/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
| derek-thomas/distilhubert-finetuned-gtzan-efficient | derek-thomas | 2023-06-27T21:17:59Z | 160 | 0 | transformers | ["transformers", "pytorch", "hubert", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "license:apache-2.0", "endpoints_compatible", "region:us"] | audio-classification | 2023-06-27T20:58:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-efficient
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-efficient
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6663
- Accuracy: 0.83
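A minimal inference sketch (untested against this checkpoint; `song.wav` is a placeholder path):
```python
from transformers import pipeline

# Returns the top genre labels with scores for a music clip.
classifier = pipeline("audio-classification", model="derek-thomas/distilhubert-finetuned-gtzan-efficient")
print(classifier("song.wav"))
```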
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0684 | 1.0 | 57 | 2.0340 | 0.45 |
| 1.6234 | 2.0 | 114 | 1.5087 | 0.57 |
| 1.1514 | 3.0 | 171 | 1.1417 | 0.71 |
| 1.0613 | 4.0 | 228 | 1.0161 | 0.74 |
| 0.7455 | 5.0 | 285 | 0.8655 | 0.76 |
| 0.7499 | 6.0 | 342 | 0.8169 | 0.76 |
| 0.5741 | 7.0 | 399 | 0.7420 | 0.81 |
| 0.4896 | 8.0 | 456 | 0.6782 | 0.81 |
| 0.508 | 9.0 | 513 | 0.6759 | 0.8 |
| 0.5619 | 10.0 | 570 | 0.6663 | 0.83 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0.dev20230627+cu121
- Datasets 2.13.1
- Tokenizers 0.13.3
|
| MerlynMind/merlyn-education-corpus-qa | MerlynMind | 2023-06-27T21:12:01Z | 194 | 12 | transformers | ["transformers", "pytorch", "gpt_neox", "text-generation", "MerlynMind", "education", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-06-23T20:57:21Z |
---
license: apache-2.0
tags:
- MerlynMind
- education
inference: false
---
# Merlyn-education-corpus-qa
Merlyn-education-corpus-qa is a 12b parameter decoder-style transformer model for the education domain. It is fine-tuned from a [pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) base-model.
This model was trained by [Merlyn Mind](https://www.merlyn.org/).
Merlyn-education-corpus-qa is part of the family of Merlyn Mind models designed specifically for use in in- and out-of-classroom education.
Merlyn-education-corpus-qa is a corpus-grounded question-answering model that grounds answers in the provided information snippets. A typical use-case is as part of a larger retrieval-based corpus-grounded dialog system.
## Model Date
June 26, 2023
## Model License
Apache-2.0
## Documentation
* [Merlyn Mind’s education-specific language models](https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models)
## Usage
At full precision the model needs more than 48GB of GPU memory. A single A100-80GB GPU suffices, for example. If you're running on smaller GPUs, you need an instance with multiple GPUs and/or reduced model precision (e.g. use `model.half()` before moving to the device).
Loading model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "MerlynMind/merlyn-education-corpus-qa"
device = torch.device("cuda:0") # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, fast_tokenizer=True)
model.to(device) # move to device
```
Prompt example:
```python
info = '''Information:\tThe Solar System is about 4.6 billion years old. The Sun formed by gravity in a large molecular cloud. It is mainly hydrogen, which it converts into helium.
Information:\tThe formation and evolution of the Solar System began 4.6 billion years ago with the gravitational collapse of a small part of a giant molecular cloud.
Information:\tAstronomers are now more or less certain that the order of the planets was not always as it is today. Knowing what we know today, we can see the Solar System is strange. All other planetary system we are able to study have their largest planet close to their star. Also we have noticed other oddities in the Solar System. Mars is smaller than it ought to be, and the asteroid belt has been disturbed.
Information:\tFor thousands of years, people had no need for a name for the "Solar System". They thought the Earth stayed still at the center of everything (geocentrism). The Greek philosopher Aristarchus of Samos suggested that there was a special order in the sky. Nicolaus Copernicus was the first to develop a mathematical system that described what we now call the "Solar System". This was called a "new system of the world". In the 17th century, Galileo Galilei, Johannes Kepler and Isaac Newton began to understand physics more clearly. People began to accept the idea that the Earth is a planet that moves around the Sun, and that the planets are worlds, and that all worlds are governed by the same physical laws. More recently, telescopes and space probes sometimes let us see details directly. All inner planets have surface features. The gas giants (as the name suggests) have surfaces whose make-up is gradually being discovered.
Information:\tThere are eight planets in the Solar System. From closest to farthest from the Sun, they are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. The first four planets are called terrestrial planets. They are mostly made of rock and metal, and they are mostly solid. The last four planets are called gas giants. This is because they are much larger than other planets and are mostly made of gas.
'''
qs = "Question:\tHow old is the Solar System?"
prompt = tokenizer.bos_token
prompt += '''Instruction:\tYou are to try to answer the following question using only the pieces of information given.
Instruction:\tYour response should be a well formed JSON object with an 'answerable' property followed by an 'answer' property.
Instruction:\tIf you cannot answer the question given the information, the value of the 'answerable' should be 'false' and the 'answer' should be an empty string.
Instruction:\tIf you can answer the question given the information, the value of the 'answerable' should be 'true' and your answer should be the string value of the 'answer' property.
''' + info + qs
```
Inference:
We recommend using the newline character as the stopping criterion, as follows:
```python
from transformers import StoppingCriteria, StoppingCriteriaList
eos_tokens = [tokenizer.eos_token,'\n']
eos_token_ids = [tokenizer.encode(token)[0] for token in eos_tokens]
class MultipleEOSTokensStoppingCriteria(StoppingCriteria):
def __init__(self, eos_token_ids):
self.eos_token_ids = set(eos_token_ids)
def __call__(self, input_ids, scores) -> bool:
if input_ids.shape[-1] <= 1:
return False
for eos_token_id in self.eos_token_ids:
if eos_token_id == input_ids[0, -1].item():
return True
return False
# Define stopping criteria
multiple_eos_tokens_processor = MultipleEOSTokensStoppingCriteria(eos_token_ids)
stopping_criteria = StoppingCriteriaList([multiple_eos_tokens_processor])
```
It can be used in inference as follows:
```python
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generate_ids = model.generate(
**inputs,
max_new_tokens=1024,
temperature=0.0,
num_beams=2,
stopping_criteria=stopping_criteria
)
response = tokenizer.decode(generate_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True)
```
Example output (after response processing):
```json
[{"answerable": "true", "answer": "4.6 billion years"}]
```
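The card does not spell out the response-processing step; a plausible sketch, assuming the `qs` and `response` variables from the blocks above, is to strip the echoed prompt and parse the remainder as JSON:
```python
import json

# Keep only the text generated after the final question, then parse it as JSON.
# Assumption: the model echoes the prompt and emits a JSON value afterwards.
generated = response.split(qs)[-1].strip()
answer = json.loads(generated)
print(answer)
```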
## Citation
To cite this model, please use:
```
@online{MerlynEducationModels,
author = {Merlyn Mind AI Team},
title = {Merlyn Mind's education-domain language models},
year = {2023},
url = {https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models},
urldate = {2023-06-26}
}
```
|
| MerlynMind/merlyn-education-safety | MerlynMind | 2023-06-27T21:11:21Z | 22 | 14 | transformers | ["transformers", "pytorch", "gpt_neox", "text-generation", "MerlynMind", "education", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-06-24T18:55:34Z |
---
license: apache-2.0
tags:
- MerlynMind
- education
inference: false
---
# Merlyn-education-safety
Merlyn-education-safety is a 12b parameter decoder-style transformer model for the education domain. It is fine-tuned from a [pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) base-model.
This model was trained by [Merlyn Mind](https://www.merlyn.org/).
Merlyn-education-safety is part of the family of Merlyn Mind models designed specifically for use in in- and out-of-classroom education.
Merlyn-education-safety classifies queries as appropriate or inappropriate for in-classroom discussion. A typical use is as part of a larger educational AI assistant.
## Model Date
June 26, 2023
## Model License
Apache-2.0
## Documentation
* [Merlyn Mind’s education-specific language models](https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models)
## Usage
At full precision the model needs more than 48GB of GPU memory. A single A100-80GB GPU suffices, for example. If you're running on smaller GPUs, you need an instance with multiple GPUs and/or reduced model precision (e.g. use `model.half()` before moving to the device).
Loading model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "MerlynMind/merlyn-education-safety"
device = torch.device("cuda:0") # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, fast_tokenizer=True)
model.to(device) # move to device
```
Prompt example:
```python
query = "What are the seven banned words on network TV"
prompt = tokenizer.bos_token
prompt += '''Instruction:\tDetermine if the provided input message is appropriate or inappropriate.
Instruction:\tIf the provided input message is inappropriate, offensive, sexual, derogatory, or discriminatory in the context of an elementary school classroom, the output should state that the input message is 'inappropriate', otherwise the output should state that the input message is 'appropriate'.
Instruction:\tBe very strict on appropriateness.
Instruction:\tIn the output, write 'appropriate' or 'inappropriate'.
Message:''' + f"\n{query}" + " Response:"
```
Inference:
```python
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generate_ids = model.generate(
**inputs,
max_new_tokens=32,
temperature=0.0,
num_beams=2
)
response = tokenizer.decode(generate_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True)
```
Example output (after response processing):
```
The input message is inappropriate.
```
## Citation
To cite this model, please use:
```
@online{MerlynEducationModels,
author = {Merlyn Mind AI Team},
title = {Merlyn Mind's education-domain language models},
year = {2023},
url = {https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models},
urldate = {2023-06-26}
}
```
|
| MerlynMind/merlyn-education-teacher-assistant | MerlynMind | 2023-06-27T21:10:52Z | 39 | 12 | transformers | ["transformers", "pytorch", "gpt_neox", "text-generation", "MerlynMind", "education", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2023-06-24T18:58:56Z |
---
license: apache-2.0
tags:
- MerlynMind
- education
inference: false
---
# Merlyn-education-teacher-assistant
Merlyn-education-teacher-assistant is a 12b parameter decoder-style transformer model for the education domain. It is fine-tuned from a [pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) base-model.
This model was trained by [Merlyn Mind](https://www.merlyn.org/).
Merlyn-education-teacher-assistant is part of the family of Merlyn Mind models designed specifically for use in in- and out-of-classroom education.
Merlyn-education-teacher-assistant makes helpful recommendations based on the ongoing classroom discussion, suggesting research activities and topics for further exploration.
## Model Date
June 26, 2023
## Model License
Apache-2.0
## Documentation
* [Merlyn Mind’s education-specific language models](https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models)
## Usage
At full precision the model needs more than 48GB of GPU memory. A single A100-80GB GPU suffices, for example. If you're running on smaller GPUs, you need an instance with multiple GPUs and/or reduced model precision (e.g. use `model.half()` before moving to the device).
Loading model and tokenizer:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_path = "MerlynMind/merlyn-education-teacher-assistant"
device = torch.device("cuda:0") # change device id as necessary
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, fast_tokenizer=True)
model.to(device) # move to device
```
Prompt example:
```python
conversation = ''''user1':\tHow do some gases help keep the Earth warm?
'user2':\tSome gases, called greenhouse gases, act like a blanket around Earth by trapping heat from the sun in the atmosphere, which keeps our planet warm. This process is known as the greenhouse effect.
'user1':\tHow can we reduce greenhouse gas emissions?
'user2':\tWe can reduce greenhouse gas emissions by using renewable energy sources, increasing energy efficiency, and reducing waste.'''
prompt = tokenizer.bos_token
prompt += '''Instruction:\tYou are teaching high school students.
Instruction:\tYou are observing the following conversation between two users.
Instruction:\tGenerate 3 research activities based on the conversation.
Instruction:\tThe research activities should be doable by high school students.
Instruction:\tYour response should be a well-formed JSON array of 3 objects, each with a 'title' property and an 'activity' property.
Conversation:''' + f"\n{conversation}" + " Response:"
```
Inference:
```python
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generate_ids = model.generate(
**inputs,
max_new_tokens=1024,
temperature=0.0,
num_beams=2
)
response = tokenizer.decode(generate_ids[0],
skip_special_tokens=True,
clean_up_tokenization_spaces=True)
```
Example output (after response processing):
```json
[
{"title": "Understanding the Greenhouse Effect", "activity": "Research the greenhouse effect and the role of greenhouse gases in keeping Earth warm. Create a presentation or poster explaining the greenhouse effect and how greenhouse gases act as a blanket around Earth."},
{"title": "Renewable Energy Sources", "activity": "Identify different renewable energy sources, such as solar, wind, and geothermal energy, and explain how they can help reduce greenhouse gas emissions."},
{"title": "Energy Efficiency and Waste Reduction", "activity": "Research energy efficiency and waste reduction practices, and develop a plan to implement these practices in your school or community to reduce greenhouse gas emissions."}
]
```
## Citation
To cite this model, please use:
```
@online{MerlynEducationModels,
author = {Merlyn Mind AI Team},
title = {Merlyn Mind's education-domain language models},
year = {2023},
url = {https://www.merlyn.org/blog/merlyn-minds-education-specific-language-models},
urldate = {2023-06-26}
}
```
|
| agustinl/dqn-SpaceInvadersNoFrameskip-v4 | agustinl | 2023-06-27T21:00:11Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2023-06-27T20:59:32Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 731.00 +/- 265.26
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga agustinl -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga agustinl -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga agustinl
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
| sanchit-gandhi/whisper-small-dv-4000-steps | sanchit-gandhi | 2023-06-27T20:45:18Z | 79 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dv", "dataset:mozilla-foundation/common_voice_13_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2023-06-27T16:42:26Z |
---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 10.833883923914177
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3057
- Wer Ortho: 56.5986
- Wer: 10.8339
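A minimal transcription sketch (untested against this checkpoint; the audio path is a placeholder):
```python
from transformers import pipeline

# "sample.mp3" is a hypothetical local file; the pipeline resamples audio via the processor.
asr = pipeline("automatic-speech-recognition", model="sanchit-gandhi/whisper-small-dv-4000-steps")
print(asr("sample.mp3"))
```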
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0657 | 3.26 | 1000 | 0.1608 | 58.2144 | 11.9727 |
| 0.0117 | 6.51 | 2000 | 0.2264 | 58.2213 | 11.2060 |
| 0.0014 | 9.77 | 3000 | 0.2866 | 57.3438 | 11.1069 |
| 0.0002 | 13.03 | 4000 | 0.3057 | 56.5986 | 10.8339 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1.dev0
- Tokenizers 0.13.3
|
| evankomp/learn2therm | evankomp | 2023-06-27T20:24:29Z | 6 | 1 | transformers | ["transformers", "pytorch", "bert", "text-classification", "protein", "thermostability", "doi:10.57967/hf/0815", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-06-08T19:46:13Z |
---
license: mit
tags:
- protein
- thermostability
---
__Purpose__: classifies a protein sequence as Thermophilic (>= 60°C) or Mesophilic (< 30°C) based on the host organism's growth temperature.
__Usage__:
Prepare sequences exactly as for the original pretrained model:
```python
from transformers import BertForSequenceClassification, BertTokenizer
import torch
import re

tokenizer = BertTokenizer.from_pretrained("evankomp/learn2therm", do_lower_case=False)
model = BertForSequenceClassification.from_pretrained("evankomp/learn2therm")

sequence_Example = "A E T C Z A O"
sequence_Example = re.sub(r"[UZOB]", "X", sequence_Example)
encoded_input = tokenizer(sequence_Example, return_tensors='pt')
# Take the argmax over the classification logits: 1 = thermophilic, 0 = mesophilic.
output = torch.argmax(model(**encoded_input).logits, dim=1)
```
1 indicates thermophilic, 0 mesophilic.
__Training__:
ProteinBERT (Rostlab/prot_bert) was fine-tuned on a class-balanced version of learn2therm (see [here]()), about 250k protein amino acid sequences.
Training parameters below:
```
TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=False,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=25,
eval_delay=0,
eval_steps=6,
evaluation_strategy=steps,
fp16=True,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=25,
gradient_checkpointing=True,
greater_is_better=False,
group_by_length=False,
half_precision_backend=cuda_amp,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=True,
local_rank=0,
log_level=info,
log_level_replica=passive,
log_on_each_node=True,
logging_dir=./data/ogt_protein_classifier/model/runs/Jun19_12-16-35_g3070,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=1,
logging_strategy=steps,
lr_scheduler_type=linear,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=loss,
mp_parameters=,
no_cuda=False,
num_train_epochs=2,
optim=adamw_hf,
optim_args=None,
output_dir=./data/ogt_protein_classifier/model,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=32,
per_device_train_batch_size=32,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
ray_scope=last,
remove_unused_columns=True,
report_to=['tensorboard', 'codecarbon'],
resume_from_checkpoint=None,
run_name=./data/ogt_protein_classifier/model,
save_on_each_node=False,
save_steps=6,
save_strategy=steps,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
torch_compile=False,
torch_compile_backend=None,
torch_compile_mode=None,
torchdynamo=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_ipex=False,
use_legacy_prediction_loop=False,
use_mps_device=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
```
See the [training repository](https://github.com/BeckResearchLab/learn2thermML) for code.
|
| emresvd/u213 | emresvd | 2023-06-27T20:14:55Z | 0 | 0 | keras | ["keras", "tf-keras", "region:us"] | null | 2023-06-27T20:14:50Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
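A minimal loading sketch (assuming the standard Keras Hub workflow):
```python
from huggingface_hub import from_pretrained_keras

# Reload the saved Keras model from the Hub (requires TensorFlow installed).
model = from_pretrained_keras("emresvd/u213")
model.summary()
```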
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
| renyulin/gptneo125M-es-sft-lora8bit | renyulin | 2023-06-27T20:04:18Z | 0 | 0 | peft | ["peft", "region:us"] | null | 2023-06-27T20:04:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
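A minimal adapter-loading sketch; the base checkpoint is an assumption inferred from the repo name, not stated in this card:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Assumption: the base model is EleutherAI/gpt-neo-125m (inferred from the repo name).
base = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m", load_in_8bit=True)
model = PeftModel.from_pretrained(base, "renyulin/gptneo125M-es-sft-lora8bit")
```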
### Framework versions
- PEFT 0.4.0.dev0
|
| YakovElm/Apache_5_BERT_Over_Sampling | YakovElm | 2023-06-27T19:58:16Z | 61 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-06-27T19:57:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache_5_BERT_Over_Sampling
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache_5_BERT_Over_Sampling
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0327
- Train Accuracy: 0.9875
- Validation Loss: 1.1880
- Validation Accuracy: 0.7549
- Epoch: 2
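A minimal inference sketch (untested; this repo ships TensorFlow weights, and the label meanings are not documented here):
```python
from transformers import pipeline

# framework="tf" selects the TensorFlow backend matching the published weights.
clf = pipeline("text-classification", model="YakovElm/Apache_5_BERT_Over_Sampling", framework="tf")
print(clf("Example issue text"))
```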
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4729 | 0.7476 | 0.7658 | 0.7353 | 0 |
| 0.0872 | 0.9687 | 1.0498 | 0.8096 | 1 |
| 0.0327 | 0.9875 | 1.1880 | 0.7549 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
| kgBolt/AIdungeon_draft | kgBolt | 2023-06-27T19:37:32Z | 2 | 0 | peft | ["peft", "region:us"] | null | 2023-06-27T19:37:30Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
| Sreyes76/distilbert-base-uncased-finetuned-emotion | Sreyes76 | 2023-06-27T19:37:03Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-06-20T23:20:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9231096192856936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2160
- Accuracy: 0.923
- F1: 0.9231
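A minimal inference sketch (untested against this checkpoint):
```python
from transformers import pipeline

# Returns the predicted emotion label with its score.
classifier = pipeline("text-classification", model="Sreyes76/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))
```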
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8281 | 1.0 | 250 | 0.3067 | 0.908 | 0.9055 |
| 0.2466 | 2.0 | 500 | 0.2160 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
| DionnisB/colab | DionnisB | 2023-06-27T19:20:35Z | 0 | 0 | null | ["license:creativeml-openrail-m", "region:us"] | null | 2023-06-05T13:31:42Z |
---
license: creativeml-openrail-m
---
|
| Bodolaz/Unit-4.2-final2 | Bodolaz | 2023-06-27T19:17:52Z | 0 | 0 | null | ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2023-06-27T19:17:30Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Unit-4.2-final2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 26.20 +/- 25.13
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
| leniero/gmag | leniero | 2023-06-27T19:08:30Z | 0 | 0 | diffusers | ["diffusers", "gmag", "queer", "brazil", "en", "license:creativeml-openrail-m", "region:us"] | null | 2023-06-06T23:39:36Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
tags:
- gmag
- queer
- brazil
---
|
| ivyraine/test_model | ivyraine | 2023-06-27T19:01:17Z | 0 | 0 | adapter-transformers | ["adapter-transformers", "region:us"] | null | 2023-06-27T19:00:36Z |
---
library_name: adapter-transformers
---
|
| facebook/galactica-125m | facebook | 2023-06-27T19:00:15Z | 2,773 | 36 | transformers | ["transformers", "pytorch", "safetensors", "opt", "text-generation", "galactica", "arxiv:1810.03993", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2022-11-16T13:21:41Z |
---
license: cc-by-nc-4.0
tags:
- galactica
widget:
- text: "The Transformer architecture [START_REF]"
- text: "The Schwarzschild radius is defined as: \\["
- text: "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>"
- text: "Lecture 1: The Ising Model\n\n"
- text: "[START_I_SMILES]"
- text: "[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO] ## Keywords"
inference: false
---

# GALACTICA 125M (mini)
Model card from the original [repo](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md)
Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).
## Model Details
The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:
| Size | Parameters |
|:-----------:|:-----------:|
| `mini` | 125 M |
| `base` | 1.3 B |
| `standard` | 6.7 B |
| `large` | 30 B |
| `huge` | 120 B |
## Release Date
November 2022
## Model Type
Transformer based architecture in a decoder-only setup with a few modifications (see paper for more details).
## Paper & Demo
[Paper](https://galactica.org/paper.pdf) / [Demo](https://galactica.org)
## Model Use
The primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain. We also anticipate the model will be useful for developers who wish to build scientific tooling. However, we caution against production use without safeguards given the potential of language models to hallucinate.
The models are made available under a non-commercial CC BY-NC 4.0 license. More information about how to use the model can be found in the README.md of this repository.
## Training Data
The GALACTICA models are trained on 106 billion tokens of open-access scientific text and data. This includes papers, textbooks, scientific websites, encyclopedias, reference material, knowledge bases, and more. We tokenize different modalities to provide a natural language interface for different tasks. See the README.md for more information. See the paper for full information on the training data.
## How to use
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m", device_map="auto")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m", device_map="auto", torch_dtype=torch.float16)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-125m")
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m", device_map="auto", load_in_8bit=True)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
## Performance and Limitations
The model outperforms several existing language models on a range of knowledge probes, reasoning, and knowledge-intensive scientific tasks. This also extends to general NLP tasks, where GALACTICA outperforms other open source general language models. That being said, we note a number of limitations in this section.
As with other language models, GALACTICA is often prone to hallucination - and training on a high-quality academic corpus does not prevent this, especially for less popular and less cited scientific concepts. There are no guarantees of truthful output when generating from the model. This extends to specific modalities such as citation prediction. While GALACTICA's citation behaviour approaches the ground truth citation behaviour with scale, the model continues to exhibit a popularity bias at larger scales.
In addition, we evaluated the model on several types of benchmarks related to stereotypes and toxicity. Overall, the model exhibits substantially lower toxicity rates compared to other large language models. That being said, the model continues to exhibit bias on certain measures (see the paper for details). So we recommend care when using the model for generations.
## Broader Implications
GALACTICA can potentially be used as a new way to discover academic literature. We also expect a lot of downstream use for application to particular domains, such as mathematics, biology, and chemistry. In the paper, we demonstrated several examples of the model acting as alternative to standard search tools. We expect a new generation of scientific tools to be built upon large language models such as GALACTICA.
We encourage researchers to investigate beneficial and new use cases for these models. That being said, it is important to be aware of the current limitations of large language models. Researchers should pay attention to common issues such as hallucination and biases that could emerge from using these models.
## Citation
```bibtex
@inproceedings{GALACTICA,
title={GALACTICA: A Large Language Model for Science},
author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
year={2022}
}
```
|
| facebook/hubert-xlarge-ls960-ft | facebook | 2023-06-27T18:52:32Z | 6,537 | 12 | transformers | ["transformers", "pytorch", "tf", "safetensors", "hubert", "automatic-speech-recognition", "speech", "audio", "hf-asr-leaderboard", "en", "dataset:libri-light", "dataset:librispeech_asr", "arxiv:2106.07447", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language: en
datasets:
- libri-light
- librispeech_asr
tags:
- speech
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: hubert-large-ls960-ft
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 1.8
---
# Hubert-Extra-Large-Finetuned
[Facebook's Hubert](https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression)
The extra-large model fine-tuned on 960h of Librispeech on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
The model is a fine-tuned version of [hubert-xlarge-ll60k](https://huggingface.co/facebook/hubert-xlarge-ll60k).
[Paper](https://arxiv.org/abs/2106.07447)
Authors: Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed
**Abstract**
Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets.
The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/hubert.
# Usage
The model can be used for automatic-speech-recognition as follows:
```python
import torch
from transformers import Wav2Vec2Processor, HubertForCTC
from datasets import load_dataset
processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-xlarge-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-xlarge-ls960-ft")
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
# tokenize the 16kHz audio array (batch size 1)
input_values = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt").input_values
# retrieve logits and take the argmax to get the predicted character ids
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.decode(predicted_ids[0])
# -> "A MAN SAID TO THE UNIVERSE SIR I EXIST"
```
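To check transcription quality against the reported test WER of 1.8, the `evaluate` library can be used. This is a minimal sketch, assuming the `evaluate` and `jiwer` packages are installed; the reported number was measured on the full LibriSpeech test set, not a single sample.
```python
import evaluate

# a minimal sketch: score predicted transcriptions against references with WER
wer_metric = evaluate.load("wer")  # uses the jiwer package under the hood

references = ["A MAN SAID TO THE UNIVERSE SIR I EXIST"]
predictions = ["A MAN SAID TO THE UNIVERSE SIR I EXIST"]  # e.g. the transcription from above

wer = wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.3f}")  # 0.0 for this single matching sample
```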
|
derek-thomas/distilhubert-finetuned-gtzan
|
derek-thomas
| 2023-06-27T18:47:55Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-27T16:58:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7072
- Accuracy: 0.81
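Although the auto-generated card below omits a usage example, the checkpoint should work with the standard `audio-classification` pipeline. This is a minimal sketch; `song.wav` is a placeholder path for any audio clip.
```python
from transformers import pipeline

# a minimal sketch: predict the music genre of a clip with this checkpoint
classifier = pipeline(
    "audio-classification",
    model="derek-thomas/distilhubert-finetuned-gtzan",
)

predictions = classifier("song.wav")  # placeholder path
print(predictions)  # e.g. [{'label': 'blues', 'score': ...}, ...]
```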
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0694 | 1.0 | 57 | 2.0452 | 0.42 |
| 1.6795 | 2.0 | 114 | 1.5549 | 0.55 |
| 1.1745 | 3.0 | 171 | 1.2160 | 0.73 |
| 1.1069 | 4.0 | 228 | 1.0979 | 0.73 |
| 0.7755 | 5.0 | 285 | 0.9282 | 0.73 |
| 0.7111 | 6.0 | 342 | 0.8393 | 0.78 |
| 0.5609 | 7.0 | 399 | 0.7911 | 0.79 |
| 0.4891 | 8.0 | 456 | 0.7098 | 0.81 |
| 0.518 | 9.0 | 513 | 0.7079 | 0.8 |
| 0.5737 | 10.0 | 570 | 0.7072 | 0.81 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
schwana1/guessDoggos
|
schwana1
| 2023-06-27T18:43:06Z | 3 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-16T12:31:16Z |
---
pipeline_tag: image-classification
---
|
Hans14/poca-SoccerTwos
|
Hans14
| 2023-06-27T18:34:51Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-27T18:33:59Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Find your model_id: Hans14/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
GabrielFerreira/ppo-Huggy
|
GabrielFerreira
| 2023-06-27T18:27:46Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-27T18:27:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: GabrielFerreira/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Jumartineze/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
|
Jumartineze
| 2023-06-27T18:15:23Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T16:54:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0252
- F1: 0.5395
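The card omits a usage example; as a minimal sketch, the checkpoint should load into the standard `text-classification` pipeline (the label names depend on the training configuration, which is not documented here).
```python
from transformers import pipeline

# a minimal sketch: sentiment analysis with this fine-tuned checkpoint
classifier = pipeline(
    "text-classification",
    model="Jumartineze/distilbert-base-multilingual-cased-finetuned-MeIA-AnalisisDeSentimientos",
)

print(classifier("El hotel era cómodo y el personal muy amable."))
# -> [{'label': ..., 'score': ...}]
```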
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0599 | 1.0 | 766 | 1.0518 | 0.5080 |
| 0.9391 | 2.0 | 1532 | 1.0252 | 0.5395 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
maidacundo/falcon_qlora_r2_sql_no_schema
|
maidacundo
| 2023-06-27T18:09:11Z | 0 | 0 | null |
[
"generated_from_trainer",
"dataset:spider",
"license:apache-2.0",
"region:us"
] | null | 2023-06-27T17:13:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- spider
model-index:
- name: falcon_qlora_r2_sql_no_schema
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon_qlora_r2_sql_no_schema
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on the spider dataset.
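The card does not explain how to load the model. Assuming the repository holds QLoRA adapter weights for Falcon-7B (an assumption based on the model name, not confirmed by the card), a sketch along these lines should work; the prompt format shown is hypothetical.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "tiiuae/falcon-7b"
adapter = "maidacundo/falcon_qlora_r2_sql_no_schema"  # assumed to be a LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(model, adapter)

prompt = "Question: How many singers are there?\nSQL:"  # hypothetical prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```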
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 43.7
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
eluzhnica/mpt-30b-peft-compatible
|
eluzhnica
| 2023-06-27T18:08:52Z | 11 | 8 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:allenai/c4",
"dataset:mc4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack-dedup",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-26T20:51:20Z |
---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- allenai/c4
- mc4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack-dedup
- allenai/s2orc
inference: false
---
# MPT-30B
This is MPT-30B with added support for finetuning with PEFT (tested with QLoRA). It is not finetuned further; the weights are the same as the original MPT-30B.
I have not traced through the whole Hugging Face stack to verify that everything behaves correctly, but it does finetune with QLoRA and the outputs are reasonable.
Inspired by the implementations at https://huggingface.co/cekal/mpt-7b-peft-compatible/commits/main and
https://huggingface.co/mosaicml/mpt-7b/discussions/42.
The original description from the MosaicML team follows:
MPT-30B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-30B is part of the family of Mosaic Pretrained Transformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
MPT-30B comes with special features that differentiate it from other LLMs, including an 8k token context window (which can be further extended via finetuning; see [MPT-7B-StoryWriter](https://huggingface.co/mosaicml/mpt-7b-storywriter)), support for context-length extrapolation via [ALiBi](https://arxiv.org/abs/2108.12409), and efficient inference + training via FlashAttention. It also has strong coding abilities thanks to its pretraining mix. MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
The size of MPT-30B was also specifically chosen to make it easy to deploy on a single GPU—either 1xA100-80GB in 16-bit precision or 1xA100-40GB in 8-bit precision.
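As a minimal sketch of the 8-bit path (assuming the `bitsandbytes` and `accelerate` packages are installed; this is not an official MosaicML recipe):
```python
import transformers

# a minimal sketch: load MPT-30B in 8-bit precision on a single 40GB GPU
model = transformers.AutoModelForCausalLM.from_pretrained(
    'mosaicml/mpt-30b',
    load_in_8bit=True,    # requires bitsandbytes
    device_map='auto',    # requires accelerate
    trust_remote_code=True
)
```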
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-30B is:
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-30B:
The following models are finetuned on MPT-30B:
* [MPT-30B-Instruct](https://huggingface.co/mosaicml/mpt-30b-instruct): a model for short-form instruction following.
Built by finetuning MPT-30B on several carefully curated datasets.
* License: _CC-BY-SA-3.0_
* [MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-30B on [ShareGPT-Vicuna](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), [Camel-AI](https://huggingface.co/camel-ai),
[GPTeacher](https://github.com/teknium1/GPTeacher), [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco), [Baize](https://github.com/project-baize/baize-chatbot) and some generated datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-30b-chat)
## Model Date
June 22, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: MPT-30B: Raising the bar for open-source foundation models](https://www.mosaicml.com/blog/mpt-30b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-30b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton' # change this to use triton-based FlashAttention
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
The model was trained initially with a sequence length of 2048, with an additional pretraining stage for sequence length adaptation up to 8192. However, ALiBi enables users to increase the maximum sequence length even further during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-30b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 16384 # (input + output) tokens can now be up to 16384
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the MPT-30B tokenizer which is identical to the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('mosaicml/mpt-30b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
import torch
from transformers import pipeline
with torch.autocast('cuda', dtype=torch.bfloat16):
inputs = tokenizer('Here is a recipe for vegan banana bread:\n', return_tensors="pt").to('cuda')
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
# or using the HF pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 29.95B |
| n_layers | 48 |
| n_heads | 64 |
| d_model | 7168 |
| vocab size | 50432 |
| sequence length | 8192 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
### Data Mix
The model was trained for 1T tokens on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English (200+ words) | 2417.99 B | 33.50% | 335 B | 0.14 |
| c4 - English - SemDedup 80% | 100.42 B | 29.90% | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 8.50% | 85 B | 0.097 |
| The Stack - Selected Languages | 463.78 B | 10.00% | 100 B | 0.22 |
| RedPajama - Wikipedia | 4.87 B | 4.00% | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 4.50% | 45 B | 0.42 |
| Semantic Scholar ORC | 48.95 B | 3.30% | 33 B | 0.67 |
| RedPajama - Books | 26.02 B | 3.00% | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 1.90% | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 1.40% | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above. The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the sequence length. To build 8k support into MPT-30B efficiently, we first pre-trained on 1T tokens using sequences that were 2k tokens long, and then trained for an additional 50B tokens using sequences that were 8k tokens long.
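As an illustration of the per-sample mixture described above (not MosaicML's actual data loader), the source dataset for each sample can be drawn with the listed probabilities:
```python
import random

# illustrative only: draw each sample's source dataset with the mix proportions above
mix = {
    "mc4-english": 0.335,
    "c4-semdedup": 0.299,
    "the-stack-selected": 0.100,
    "redpajama-commoncrawl": 0.085,
    # ... remaining sources; random.choices normalizes the weights,
    # so a partial list still samples in proportion
}

sources, weights = zip(*mix.items())
print(random.choices(sources, weights=weights, k=8))  # one source per sample in a batch of 8
```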
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)).
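The repeated-space behavior is easy to verify directly; a small sketch (exact tokens depend on the tokenizer version):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

code = "def f():\n        return 1"  # eight leading spaces on the second line
ids = tokenizer(code).input_ids
print(len(ids), tokenizer.convert_ids_to_tokens(ids))
# the run of spaces should be compressed into a single multi-space token
```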
### Training Configuration
The model was trained in three stages using the [MosaicML Platform](https://www.mosaicml.com/platform):
(i) First it was trained on 440 A100-40GBs with a batch size of 1760.
(ii) Then, on 216 A100-40GBs with a batch size of 1728.
(iii) Training was completed on 256 H100-80GBs with a batch size of 512 with 8k context length and 50B tokens.
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-30B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-30B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-30B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-30b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-30B: Raising the bar
for open-source foundation models},
year = {2023},
url = {www.mosaicml.com/blog/mpt-30b},
note = {Accessed: 2023-06-22},
urldate = {2023-06-22}
}
```
|
winterForestStump/Roberta-fake-news-detector
|
winterForestStump
| 2023-06-27T17:53:47Z | 137 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"en",
"license:gpl-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T09:46:03Z |
---
license: gpl-2.0
language:
- en
tags:
- text-classification
widget:
- text: "According to the former prime minister of Italy, Mario Draghi, no one in the EU needs peace or negotiations, only the total defeat of Russia, and the destroyed Ukraine will just be collateral damage of the EU ambitions."
example_title: "Fake news"
---
# Fake News Recognition
<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned version of the RoBERTa model [jy46604790/Fake-News-Bert-Detect](https://huggingface.co/jy46604790/Fake-News-Bert-Detect).
It was trained on 8,000 news articles from the https://euvsdisinfo.eu/ portal.
It returns a result for any news text of up to 512 tokens (the excess is truncated automatically).
Labels:
* 0: Fake news
* 1: Real news
## How to Get Started with the Model
Use the code below to get started with the model.
### Download The Model
```python
from transformers import pipeline
MODEL = "winterForestStump/Roberta-fake-news-detector"
clf = pipeline("text-classification", model=MODEL, tokenizer=MODEL)
```
### Feed Data
```python
text = "From the very beginning, the EU has been extremely non-transparent. The deployment of the European Union presence in Armenia was carried out forcefully, under serious pressure from Brussels"
```
### Result
```python
result = clf(text)
result
```
### Output
```
[{'label': 'FAKE', 'score': 0.9999946355819702}]
```
About the data source EUVSDISINFO.eu:
Using data analysis and media monitoring services in multiple languages, EUvsDisinfo identifies, compiles, and exposes disinformation cases originating in pro-Kremlin outlets. These cases (and their disproofs) are collected in the EUvsDisinfo database – the only searchable, open-source repository of its kind. The database is updated every week.
|
chriskim2273/test_headline_qa
|
chriskim2273
| 2023-06-27T17:53:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-27T17:31:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test_headline_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_headline_qa
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9920
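For inference, the checkpoint should plug into the standard `question-answering` pipeline; a minimal sketch with a made-up headline as context:
```python
from transformers import pipeline

# a minimal sketch: extractive QA over a headline with this checkpoint
qa = pipeline("question-answering", model="chriskim2273/test_headline_qa")

result = qa(
    question="Who acquired the startup?",                  # hypothetical question
    context="Acme Corp acquired the AI startup for $2B.",  # hypothetical headline
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```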
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 5.7992 |
| No log | 2.0 | 4 | 5.7051 |
| No log | 3.0 | 6 | 5.6068 |
| No log | 4.0 | 8 | 5.5043 |
| No log | 5.0 | 10 | 5.3968 |
| No log | 6.0 | 12 | 5.2848 |
| No log | 7.0 | 14 | 5.1784 |
| No log | 8.0 | 16 | 5.0876 |
| No log | 9.0 | 18 | 5.0222 |
| No log | 10.0 | 20 | 4.9920 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zofiski/squad-bloom-3b
|
zofiski
| 2023-06-27T17:36:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-27T17:36:33Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
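The card gives no usage instructions. Assuming the repository contains a LoRA adapter for BLOOM-3B (inferred from the repo name, not confirmed), loading would look roughly like this:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "bigscience/bloom-3b"  # assumption inferred from the repo name
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, "zofiski/squad-bloom-3b")
tokenizer = AutoTokenizer.from_pretrained(base)
```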
|
foch3/watermash
|
foch3
| 2023-06-27T17:35:54Z | 0 | 1 | null |
[
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-25T05:38:40Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
---
**Please read creativeml-openrail-m license before using it.**
A model merged with the [Watersmudge LoRA](https://huggingface.co/foch3/Watersmudge).
This merge is mainly intended for generating landscape paintings, so it may not work as well for figure painting.
**Watercolor prompts (*recommended*):** (watercolor (medium):1.2), ink wash painting, (sketch:1.2)
**Samples:**
Sample Image *with* watercolor prompts
<img src="https://huggingface.co/foch3/watermash/resolve/main/1-1.png">
<img src="https://huggingface.co/foch3/watermash/resolve/main/2-1.png">
Sample Image *without* watercolor prompts
<img src="https://huggingface.co/foch3/watermash/resolve/main/1-2.png">
<img src="https://huggingface.co/foch3/watermash/resolve/main/2-2.png">
Figure painting samples
<img src="https://huggingface.co/foch3/watermash/resolve/main/f1.png">
<img src="https://huggingface.co/foch3/watermash/resolve/main/f2.png">
|
Nekochu/Wav2Lip
|
Nekochu
| 2023-06-27T17:32:53Z | 0 | 1 | null |
[
"arxiv:2008.10010",
"region:us"
] | null | 2023-06-27T17:25:26Z |
Original upload: https://github.com/Rudrabha/Wav2Lip
# **Wav2Lip**: *Accurately Lip-syncing Videos In The Wild*
For commercial requests, please contact us at radrabha.m@research.iiit.ac.in or prajwal.k@research.iiit.ac.in. We have an HD model ready that can be used commercially.
This code is part of the paper: _A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild_ published at ACM Multimedia 2020.
[](https://paperswithcode.com/sota/lip-sync-on-lrs2?p=a-lip-sync-expert-is-all-you-need-for-speech)
[](https://paperswithcode.com/sota/lip-sync-on-lrs3?p=a-lip-sync-expert-is-all-you-need-for-speech)
[](https://paperswithcode.com/sota/lip-sync-on-lrw?p=a-lip-sync-expert-is-all-you-need-for-speech)
|📑 Original Paper|📰 Project Page|🌀 Demo|⚡ Live Testing|📔 Colab Notebook
|:-:|:-:|:-:|:-:|:-:|
[Paper](http://arxiv.org/abs/2008.10010) | [Project Page](http://cvit.iiit.ac.in/research/projects/cvit-projects/a-lip-sync-expert-is-all-you-need-for-speech-to-lip-generation-in-the-wild/) | [Demo Video](https://youtu.be/0fXaDCZNOJc) | [Interactive Demo](https://bhaasha.iiit.ac.in/lipsync) | [Colab Notebook](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing) /[Updated Collab Notebook](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH)
<img src="https://drive.google.com/uc?export=view&id=1Wn0hPmpo4GRbCIJR8Tf20Akzdi1qjjG9"/>
----------
**Highlights**
----------
- Weights of the visual quality disc have been updated in the readme!
- Lip-sync videos to any target speech with high accuracy :100:. Try our [interactive demo](https://bhaasha.iiit.ac.in/lipsync).
- :sparkles: Works for any identity, voice, and language. Also works for CGI faces and synthetic voices.
- Complete training code, inference code, and pretrained models are available :boom:
- Or, quick-start with the Google Colab Notebook: [Link](https://colab.research.google.com/drive/1tZpDWXz49W6wDcTprANRGLo2D_EbD5J8?usp=sharing). Checkpoints and samples are available in a Google Drive [folder](https://drive.google.com/drive/folders/1I-0dNLfFOSFwrfqjNa-SXuwaURHE5K4k?usp=sharing) as well. There is also a [tutorial video](https://www.youtube.com/watch?v=Ic0TBhfuOrA) on this, courtesy of [What Make Art](https://www.youtube.com/channel/UCmGXH-jy0o2CuhqtpxbaQgA). Also, thanks to [Eyal Gruss](https://eyalgruss.com), there is a more accessible [Google Colab notebook](https://j.mp/wav2lip) with more useful features. A tutorial collab notebook is present at this [link](https://colab.research.google.com/drive/1IjFW1cLevs6Ouyu4Yht4mnR4yeuMqO7Y#scrollTo=MH1m608OymLH).
- :fire: :fire: Several new, reliable evaluation benchmarks and metrics [[`evaluation/` folder of this repo]](https://github.com/Rudrabha/Wav2Lip/tree/master/evaluation) released. Instructions to calculate the metrics reported in the paper are also present.
--------
**Disclaimer**
--------
All results from this open-source code or our [demo website](https://bhaasha.iiit.ac.in/lipsync) should be used for research/academic/personal purposes only. As the models are trained on the <a href="http://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html">LRS2 dataset</a>, any form of commercial use is strictly prohibited. For commercial requests, please contact us directly!
Prerequisites
-------------
- `Python 3.6`
- ffmpeg: `sudo apt-get install ffmpeg`
- Install the necessary packages using `pip install -r requirements.txt`. Alternatively, instructions for using a docker image are provided [here](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668). Have a look at [this comment](https://github.com/Rudrabha/Wav2Lip/issues/131#issuecomment-725478562) and comment on [the gist](https://gist.github.com/xenogenesi/e62d3d13dadbc164124c830e9c453668) if you encounter any issues.
- Face detection [pre-trained model](https://www.adrianbulat.com/downloads/python-fan/s3fd-619a316812.pth) should be downloaded to `face_detection/detection/sfd/s3fd.pth`. Alternative [link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/prajwal_k_research_iiit_ac_in/EZsy6qWuivtDnANIG73iHjIBjMSoojcIV0NULXV-yiuiIg?e=qTasa8) if the above does not work.
Getting the weights
----------
| Model | Description | Link to the model |
| :-------------: | :---------------: | :---------------: |
| Wav2Lip | Highly accurate lip-sync | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/Eb3LEzbfuKlJiR600lQWRxgBIY27JZg80f7V9jtMfbNDaQ?e=TBFBVW) |
| Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EdjI7bZlgApMqsVoEUUXpLsBxqXbn5z8VTmoxp55YNDcIA?e=n9ljGW) |
| Expert Discriminator | Weights of the expert discriminator | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQRvmiZg-HRAjvI6zqN9eTEBP74KefynCwPWVmF57l-AYA?e=ZRPHKP) |
| Visual Quality Discriminator | Weights of the visual disc trained in a GAN setup | [Link](https://iiitaphyd-my.sharepoint.com/:u:/g/personal/radrabha_m_research_iiit_ac_in/EQVqH88dTm1HjlK11eNba5gBbn15WMS0B0EZbDBttqrqkg?e=ic0ljo) |
Lip-syncing videos using the pre-trained models (Inference)
-------
You can lip-sync any video to any audio:
```bash
python inference.py --checkpoint_path <ckpt> --face <video.mp4> --audio <an-audio-source>
```
The result is saved (by default) in `results/result_voice.mp4`. You can specify it as an argument, similar to several other available options. The audio source can be any file supported by `FFMPEG` containing audio data: `*.wav`, `*.mp3` or even a video file, from which the code will automatically extract the audio.
##### Tips for better results:
- Experiment with the `--pads` argument to adjust the detected face bounding box. Often leads to improved results. You might need to increase the bottom padding to include the chin region. E.g. `--pads 0 20 0 0`.
- If you see the mouth position dislocated or weird artifacts such as two mouths, it may be caused by over-smoothing the face detections. Use the `--nosmooth` argument and give it another try.
- Experiment with the `--resize_factor` argument, to get a lower resolution video. Why? The models are trained on faces which were at a lower resolution. You might get better, visually pleasing results for 720p videos than for 1080p videos (in many cases, the latter works well too).
- The Wav2Lip model without GAN usually needs more experimenting with the above two to get the most ideal results, and sometimes, can give you a better result as well.
Preparing LRS2 for training
----------
Our models are trained on LRS2. See [here](#training-on-datasets-other-than-lrs2) for a few suggestions regarding training on other datasets.
##### LRS2 dataset folder structure
```
data_root (mvlrs_v1)
├── main, pretrain (we use only main folder in this work)
| ├── list of folders
| │ ├── five-digit numbered video IDs ending with (.mp4)
```
Place the LRS2 filelists (train, val, test) `.txt` files in the `filelists/` folder.
##### Preprocess the dataset for fast training
```bash
python preprocess.py --data_root data_root/main --preprocessed_root lrs2_preprocessed/
```
Additional options like `batch_size` and the number of GPUs to use in parallel can also be set.
##### Preprocessed LRS2 folder structure
```
preprocessed_root (lrs2_preprocessed)
├── list of folders
| ├── Folders with five-digit numbered video IDs
| │ ├── *.jpg
| │ ├── audio.wav
```
Train!
----------
There are two major steps: (i) Train the expert lip-sync discriminator, (ii) Train the Wav2Lip model(s).
##### Training the expert discriminator
You can download [the pre-trained weights](#getting-the-weights) if you want to skip this step. To train it:
```bash
python color_syncnet_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints>
```
##### Training the Wav2Lip models
You can either train the model without the additional visual quality discriminator (< 1 day of training) or use the discriminator (~2 days). For the former, run:
```bash
python wav2lip_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints> --syncnet_checkpoint_path <path_to_expert_disc_checkpoint>
```
To train with the visual quality discriminator, you should run `hq_wav2lip_train.py` instead. The arguments for both files are similar. In both cases, you can resume training as well. Look at `python wav2lip_train.py --help` for more details. You can also set additional, less commonly-used hyper-parameters at the bottom of the `hparams.py` file.
Training on datasets other than LRS2
------------------------------------
Training on other datasets might require modifications to the code. Please read the following before you raise an issue:
- You might not get good results by training/fine-tuning on a few minutes of a single speaker. This is a separate research problem, to which we do not have a solution yet. Thus, we would most likely not be able to resolve your issue.
- You must train the expert discriminator for your own dataset before training Wav2Lip.
- If it is your own dataset downloaded from the web, it needs to be sync-corrected in most cases.
- Be mindful of the FPS of the videos of your dataset. Changes to FPS would need significant code changes.
- The expert discriminator's eval loss should go down to ~0.25 and the Wav2Lip eval sync loss should go down to ~0.2 to get good results.
When raising an issue on this topic, please let us know that you are aware of all these points.
We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model.
Evaluation
----------
Please check the `evaluation/` folder for the instructions.
License and Citation
----------
This repository can only be used for personal/research/non-commercial purposes. However, for commercial requests, please contact us directly at radrabha.m@research.iiit.ac.in or prajwal.k@research.iiit.ac.in. We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model. Please cite the following paper if you use this repository:
```
@inproceedings{10.1145/3394171.3413532,
author = {Prajwal, K R and Mukhopadhyay, Rudrabha and Namboodiri, Vinay P. and Jawahar, C.V.},
title = {A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild},
year = {2020},
isbn = {9781450379885},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3394171.3413532},
doi = {10.1145/3394171.3413532},
booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
pages = {484–492},
numpages = {9},
keywords = {lip sync, talking face generation, video generation},
location = {Seattle, WA, USA},
series = {MM '20}
}
```
Acknowledgements
----------
Parts of the code structure are inspired by this [TTS repository](https://github.com/r9y9/deepvoice3_pytorch). We thank the author for this wonderful code. The code for Face Detection has been taken from the [face_alignment](https://github.com/1adrianb/face-alignment) repository. We thank the authors for releasing their code and models. We thank [zabique](https://github.com/zabique) for the tutorial collab notebook.
|
mnavas/beto-finetuned-token-reqadjzar
|
mnavas
| 2023-06-27T17:23:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-07T15:07:50Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: beto-finetuned-token-reqadjzar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beto-finetuned-token-reqadjzar
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1061
- Precision: 0.2533
- Recall: 0.3333
- F1: 0.2879
- Accuracy: 0.8498
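The card omits a usage example; a minimal sketch with the `token-classification` pipeline (the entity labels are whatever the undocumented fine-tuning data defined, and the input sentence is hypothetical):
```python
from transformers import pipeline

# a minimal sketch: run the fine-tuned token classifier on Spanish text
ner = pipeline(
    "token-classification",
    model="mnavas/beto-finetuned-token-reqadjzar",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("El contrato exige una garantía de cumplimiento del 5%."))
```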
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7331 | 1.0 | 24 | 0.5920 | 0.0 | 0.0 | 0.0 | 0.7532 |
| 0.4759 | 2.0 | 48 | 0.3954 | 0.0085 | 0.0175 | 0.0115 | 0.8321 |
| 0.3186 | 3.0 | 72 | 0.5127 | 0.0188 | 0.0702 | 0.0296 | 0.8159 |
| 0.1906 | 4.0 | 96 | 0.4865 | 0.1190 | 0.2632 | 0.1639 | 0.8509 |
| 0.145 | 5.0 | 120 | 0.4650 | 0.1597 | 0.3333 | 0.2159 | 0.8760 |
| 0.1107 | 6.0 | 144 | 0.5465 | 0.1062 | 0.2105 | 0.1412 | 0.8514 |
| 0.0903 | 7.0 | 168 | 0.5441 | 0.1359 | 0.2456 | 0.175 | 0.8796 |
| 0.0698 | 8.0 | 192 | 0.4353 | 0.1204 | 0.2281 | 0.1576 | 0.8842 |
| 0.0505 | 9.0 | 216 | 0.7170 | 0.19 | 0.3333 | 0.2420 | 0.8432 |
| 0.0687 | 10.0 | 240 | 0.5893 | 0.1963 | 0.3684 | 0.2561 | 0.8860 |
| 0.039 | 11.0 | 264 | 0.5877 | 0.1951 | 0.4211 | 0.2667 | 0.8780 |
| 0.0278 | 12.0 | 288 | 0.5715 | 0.2237 | 0.2982 | 0.2556 | 0.8577 |
| 0.0354 | 13.0 | 312 | 0.9535 | 0.2283 | 0.3684 | 0.2819 | 0.8532 |
| 0.024 | 14.0 | 336 | 0.6500 | 0.2169 | 0.3158 | 0.2571 | 0.8674 |
| 0.0223 | 15.0 | 360 | 0.7513 | 0.1855 | 0.4035 | 0.2541 | 0.8722 |
| 0.0156 | 16.0 | 384 | 0.6566 | 0.3 | 0.4737 | 0.3673 | 0.9012 |
| 0.0156 | 17.0 | 408 | 0.8436 | 0.2292 | 0.3860 | 0.2876 | 0.8696 |
| 0.0189 | 18.0 | 432 | 0.8043 | 0.1711 | 0.2281 | 0.1955 | 0.8181 |
| 0.0128 | 19.0 | 456 | 0.6518 | 0.1619 | 0.2982 | 0.2099 | 0.8814 |
| 0.0122 | 20.0 | 480 | 0.8418 | 0.2347 | 0.4035 | 0.2968 | 0.8793 |
| 0.0242 | 21.0 | 504 | 0.7948 | 0.2292 | 0.3860 | 0.2876 | 0.8814 |
| 0.0124 | 22.0 | 528 | 0.8059 | 0.2037 | 0.3860 | 0.2667 | 0.8842 |
| 0.0098 | 23.0 | 552 | 0.9458 | 0.1765 | 0.2632 | 0.2113 | 0.8584 |
| 0.0287 | 24.0 | 576 | 0.7110 | 0.1488 | 0.3158 | 0.2022 | 0.8825 |
| 0.0253 | 25.0 | 600 | 0.6823 | 0.2021 | 0.3333 | 0.2517 | 0.8781 |
| 0.0151 | 26.0 | 624 | 0.7382 | 0.2022 | 0.3158 | 0.2466 | 0.8791 |
| 0.0118 | 27.0 | 648 | 0.6036 | 0.2360 | 0.3684 | 0.2877 | 0.8965 |
| 0.0102 | 28.0 | 672 | 0.9152 | 0.1765 | 0.3158 | 0.2264 | 0.8446 |
| 0.0229 | 29.0 | 696 | 0.6878 | 0.2584 | 0.4035 | 0.3151 | 0.8982 |
| 0.0168 | 30.0 | 720 | 0.7333 | 0.2784 | 0.4737 | 0.3506 | 0.8937 |
| 0.0145 | 31.0 | 744 | 0.6051 | 0.1864 | 0.3860 | 0.2514 | 0.9 |
| 0.0207 | 32.0 | 768 | 0.9083 | 0.3279 | 0.3509 | 0.3390 | 0.8894 |
| 0.0191 | 33.0 | 792 | 0.6983 | 0.2222 | 0.3509 | 0.2721 | 0.8884 |
| 0.0103 | 34.0 | 816 | 0.7287 | 0.2449 | 0.4211 | 0.3097 | 0.8840 |
| 0.0091 | 35.0 | 840 | 0.5929 | 0.2184 | 0.3333 | 0.2639 | 0.8851 |
| 0.0059 | 36.0 | 864 | 0.7604 | 0.2421 | 0.4035 | 0.3026 | 0.8810 |
| 0.0035 | 37.0 | 888 | 0.9380 | 0.2143 | 0.3684 | 0.2710 | 0.8622 |
| 0.0025 | 38.0 | 912 | 0.9824 | 0.2 | 0.3509 | 0.2548 | 0.8704 |
| 0.0059 | 39.0 | 936 | 1.0658 | 0.2796 | 0.4561 | 0.3467 | 0.8669 |
| 0.0199 | 40.0 | 960 | 0.9755 | 0.1705 | 0.3860 | 0.2366 | 0.8449 |
| 0.0034 | 41.0 | 984 | 0.9697 | 0.2619 | 0.3860 | 0.3121 | 0.8656 |
| 0.0035 | 42.0 | 1008 | 1.0582 | 0.1959 | 0.3333 | 0.2468 | 0.8461 |
| 0.0088 | 43.0 | 1032 | 0.8500 | 0.1849 | 0.3860 | 0.25 | 0.8515 |
| 0.0263 | 44.0 | 1056 | 1.2832 | 0.2 | 0.3509 | 0.2548 | 0.8255 |
| 0.0088 | 45.0 | 1080 | 0.9282 | 0.2308 | 0.4211 | 0.2981 | 0.8534 |
| 0.0343 | 46.0 | 1104 | 0.7165 | 0.2222 | 0.3158 | 0.2609 | 0.8594 |
| 0.0024 | 47.0 | 1128 | 0.7355 | 0.2308 | 0.4737 | 0.3103 | 0.8782 |
| 0.0019 | 48.0 | 1152 | 0.6493 | 0.2165 | 0.3684 | 0.2727 | 0.8779 |
| 0.0009 | 49.0 | 1176 | 0.6999 | 0.1964 | 0.3860 | 0.2604 | 0.8766 |
| 0.0008 | 50.0 | 1200 | 0.7496 | 0.2062 | 0.3509 | 0.2597 | 0.8709 |
| 0.0009 | 51.0 | 1224 | 0.7670 | 0.2019 | 0.3684 | 0.2609 | 0.8750 |
| 0.0006 | 52.0 | 1248 | 0.7549 | 0.24 | 0.4211 | 0.3057 | 0.8832 |
| 0.0007 | 53.0 | 1272 | 0.7556 | 0.2706 | 0.4035 | 0.3239 | 0.8870 |
| 0.0007 | 54.0 | 1296 | 0.7188 | 0.1695 | 0.3509 | 0.2286 | 0.8833 |
| 0.0005 | 55.0 | 1320 | 0.7120 | 0.1927 | 0.3684 | 0.2530 | 0.8822 |
| 0.0009 | 56.0 | 1344 | 0.7377 | 0.2245 | 0.3860 | 0.2839 | 0.8819 |
| 0.0008 | 57.0 | 1368 | 0.7295 | 0.2277 | 0.4035 | 0.2911 | 0.8859 |
| 0.0009 | 58.0 | 1392 | 0.7158 | 0.2340 | 0.3860 | 0.2914 | 0.8900 |
| 0.0013 | 59.0 | 1416 | 0.6715 | 0.1897 | 0.3860 | 0.2543 | 0.8941 |
| 0.0006 | 60.0 | 1440 | 0.6787 | 0.21 | 0.3684 | 0.2675 | 0.8861 |
| 0.0007 | 61.0 | 1464 | 0.6794 | 0.2584 | 0.4035 | 0.3151 | 0.8940 |
| 0.0012 | 62.0 | 1488 | 0.6823 | 0.2273 | 0.3509 | 0.2759 | 0.8778 |
| 0.0008 | 63.0 | 1512 | 0.7189 | 0.2588 | 0.3860 | 0.3099 | 0.8791 |
| 0.0008 | 64.0 | 1536 | 0.7077 | 0.2371 | 0.4035 | 0.2987 | 0.8905 |
| 0.0007 | 65.0 | 1560 | 0.7201 | 0.2738 | 0.4035 | 0.3262 | 0.8860 |
| 0.0005 | 66.0 | 1584 | 0.7339 | 0.2584 | 0.4035 | 0.3151 | 0.8894 |
| 0.0005 | 67.0 | 1608 | 0.7490 | 0.2157 | 0.3860 | 0.2767 | 0.8845 |
| 0.0006 | 68.0 | 1632 | 0.7342 | 0.2162 | 0.4211 | 0.2857 | 0.8833 |
| 0.0012 | 69.0 | 1656 | 0.7287 | 0.3108 | 0.4035 | 0.3511 | 0.8895 |
| 0.0012 | 70.0 | 1680 | 0.8877 | 0.2079 | 0.3684 | 0.2658 | 0.8615 |
| 0.0007 | 71.0 | 1704 | 0.9370 | 0.2095 | 0.3860 | 0.2716 | 0.8644 |
| 0.002 | 72.0 | 1728 | 0.7715 | 0.2391 | 0.3860 | 0.2953 | 0.8677 |
| 0.0007 | 73.0 | 1752 | 0.8765 | 0.22 | 0.3860 | 0.2803 | 0.8628 |
| 0.0006 | 74.0 | 1776 | 0.8515 | 0.2371 | 0.4035 | 0.2987 | 0.8639 |
| 0.0007 | 75.0 | 1800 | 0.8448 | 0.2286 | 0.4211 | 0.2963 | 0.8633 |
| 0.0009 | 76.0 | 1824 | 0.8501 | 0.2232 | 0.4386 | 0.2959 | 0.8650 |
| 0.0007 | 77.0 | 1848 | 0.8550 | 0.2198 | 0.3509 | 0.2703 | 0.8657 |
| 0.0005 | 78.0 | 1872 | 0.7445 | 0.25 | 0.4035 | 0.3087 | 0.8780 |
| 0.0007 | 79.0 | 1896 | 0.8889 | 0.26 | 0.4561 | 0.3312 | 0.8630 |
| 0.0005 | 80.0 | 1920 | 0.8930 | 0.2812 | 0.4737 | 0.3529 | 0.8650 |
| 0.0004 | 81.0 | 1944 | 0.8678 | 0.26 | 0.4561 | 0.3312 | 0.8745 |
| 0.0005 | 82.0 | 1968 | 0.8747 | 0.2784 | 0.4737 | 0.3506 | 0.8746 |
| 0.0005 | 83.0 | 1992 | 0.8726 | 0.2872 | 0.4737 | 0.3576 | 0.8687 |
| 0.001 | 84.0 | 2016 | 0.8887 | 0.2857 | 0.4211 | 0.3404 | 0.8693 |
| 0.0006 | 85.0 | 2040 | 0.7915 | 0.2963 | 0.4211 | 0.3478 | 0.8821 |
| 0.0007 | 86.0 | 2064 | 1.0194 | 0.2857 | 0.4211 | 0.3404 | 0.8606 |
| 0.0009 | 87.0 | 2088 | 0.7594 | 0.2366 | 0.3860 | 0.2933 | 0.8777 |
| 0.0021 | 88.0 | 2112 | 0.9788 | 0.25 | 0.3333 | 0.2857 | 0.8539 |
| 0.0012 | 89.0 | 2136 | 0.8719 | 0.2093 | 0.3158 | 0.2517 | 0.8697 |
| 0.0019 | 90.0 | 2160 | 1.1859 | 0.1810 | 0.3684 | 0.2428 | 0.8111 |
| 0.001 | 91.0 | 2184 | 0.9690 | 0.2118 | 0.3158 | 0.2535 | 0.8421 |
| 0.0007 | 92.0 | 2208 | 0.9863 | 0.1880 | 0.3860 | 0.2529 | 0.8495 |
| 0.0006 | 93.0 | 2232 | 0.9942 | 0.1868 | 0.2982 | 0.2297 | 0.8641 |
| 0.0007 | 94.0 | 2256 | 1.0118 | 0.2159 | 0.3333 | 0.2621 | 0.8637 |
| 0.0007 | 95.0 | 2280 | 1.0435 | 0.2754 | 0.3333 | 0.3016 | 0.8615 |
| 0.0008 | 96.0 | 2304 | 0.9795 | 0.2471 | 0.3684 | 0.2958 | 0.8657 |
| 0.0007 | 97.0 | 2328 | 0.9189 | 0.2020 | 0.3509 | 0.2564 | 0.8807 |
| 0.0009 | 98.0 | 2352 | 0.9240 | 0.2273 | 0.3509 | 0.2759 | 0.8762 |
| 0.0005 | 99.0 | 2376 | 0.8891 | 0.2561 | 0.3684 | 0.3022 | 0.8821 |
| 0.0004 | 100.0 | 2400 | 0.9028 | 0.2469 | 0.3509 | 0.2899 | 0.8818 |
| 0.0004 | 101.0 | 2424 | 0.9228 | 0.2410 | 0.3509 | 0.2857 | 0.8830 |
| 0.0004 | 102.0 | 2448 | 0.9409 | 0.2278 | 0.3158 | 0.2647 | 0.8795 |
| 0.0006 | 103.0 | 2472 | 0.9777 | 0.24 | 0.3158 | 0.2727 | 0.8796 |
| 0.0005 | 104.0 | 2496 | 0.9872 | 0.2432 | 0.3158 | 0.2748 | 0.8791 |
| 0.0006 | 105.0 | 2520 | 0.9820 | 0.2329 | 0.2982 | 0.2615 | 0.8746 |
| 0.0006 | 106.0 | 2544 | 1.0301 | 0.2879 | 0.3333 | 0.3089 | 0.8702 |
| 0.0006 | 107.0 | 2568 | 1.0468 | 0.3226 | 0.3509 | 0.3361 | 0.8637 |
| 0.0004 | 108.0 | 2592 | 1.0155 | 0.2941 | 0.3509 | 0.3200 | 0.8683 |
| 0.0005 | 109.0 | 2616 | 0.9970 | 0.2821 | 0.3860 | 0.3259 | 0.8678 |
| 0.0004 | 110.0 | 2640 | 1.0453 | 0.28 | 0.3684 | 0.3182 | 0.8687 |
| 0.0009 | 111.0 | 2664 | 0.9247 | 0.2278 | 0.3158 | 0.2647 | 0.8747 |
| 0.0006 | 112.0 | 2688 | 0.8811 | 0.2785 | 0.3860 | 0.3235 | 0.8921 |
| 0.0005 | 113.0 | 2712 | 0.9462 | 0.1905 | 0.2807 | 0.2270 | 0.8817 |
| 0.0005 | 114.0 | 2736 | 0.9685 | 0.2078 | 0.2807 | 0.2388 | 0.8792 |
| 0.0006 | 115.0 | 2760 | 1.0339 | 0.2712 | 0.2807 | 0.2759 | 0.8672 |
| 0.0004 | 116.0 | 2784 | 1.0155 | 0.2571 | 0.3158 | 0.2835 | 0.8687 |
| 0.0005 | 117.0 | 2808 | 0.9998 | 0.25 | 0.3509 | 0.2920 | 0.8768 |
| 0.0006 | 118.0 | 2832 | 0.9849 | 0.2473 | 0.4035 | 0.3067 | 0.8715 |
| 0.0033 | 119.0 | 2856 | 0.7929 | 0.2376 | 0.4211 | 0.3038 | 0.8832 |
| 0.0485 | 120.0 | 2880 | 0.9585 | 0.2 | 0.2807 | 0.2336 | 0.8585 |
| 0.0114 | 121.0 | 2904 | 0.7619 | 0.2472 | 0.3860 | 0.3014 | 0.8831 |
| 0.0177 | 122.0 | 2928 | 0.7737 | 0.2881 | 0.2982 | 0.2931 | 0.8688 |
| 0.02 | 123.0 | 2952 | 1.1362 | 0.1959 | 0.3333 | 0.2468 | 0.8214 |
| 0.0056 | 124.0 | 2976 | 1.2073 | 0.3659 | 0.2632 | 0.3061 | 0.8277 |
| 0.0208 | 125.0 | 3000 | 0.8549 | 0.2162 | 0.2807 | 0.2443 | 0.8430 |
| 0.0066 | 126.0 | 3024 | 0.9482 | 0.2667 | 0.2807 | 0.2735 | 0.8383 |
| 0.0155 | 127.0 | 3048 | 0.7532 | 0.2289 | 0.3333 | 0.2714 | 0.8629 |
| 0.0091 | 128.0 | 3072 | 0.7973 | 0.2368 | 0.3158 | 0.2707 | 0.8524 |
| 0.0029 | 129.0 | 3096 | 0.8988 | 0.25 | 0.3684 | 0.2979 | 0.8621 |
| 0.0054 | 130.0 | 3120 | 0.9882 | 0.2299 | 0.3509 | 0.2778 | 0.8362 |
| 0.0037 | 131.0 | 3144 | 1.0792 | 0.2093 | 0.3158 | 0.2517 | 0.8468 |
| 0.0012 | 132.0 | 3168 | 0.9729 | 0.2632 | 0.3509 | 0.3008 | 0.8427 |
| 0.0009 | 133.0 | 3192 | 0.9521 | 0.2043 | 0.3333 | 0.2533 | 0.8416 |
| 0.0011 | 134.0 | 3216 | 0.9539 | 0.1978 | 0.3158 | 0.2432 | 0.8401 |
| 0.0006 | 135.0 | 3240 | 0.9692 | 0.2754 | 0.3333 | 0.3016 | 0.8504 |
| 0.0007 | 136.0 | 3264 | 0.9811 | 0.2603 | 0.3333 | 0.2923 | 0.8526 |
| 0.0007 | 137.0 | 3288 | 0.9732 | 0.25 | 0.3333 | 0.2857 | 0.8444 |
| 0.0004 | 138.0 | 3312 | 0.9955 | 0.2278 | 0.3158 | 0.2647 | 0.8373 |
| 0.0005 | 139.0 | 3336 | 0.9939 | 0.2466 | 0.3158 | 0.2769 | 0.8389 |
| 0.001 | 140.0 | 3360 | 1.0081 | 0.2432 | 0.3158 | 0.2748 | 0.8377 |
| 0.0006 | 141.0 | 3384 | 1.0216 | 0.2308 | 0.3158 | 0.2667 | 0.8404 |
| 0.0005 | 142.0 | 3408 | 1.0364 | 0.25 | 0.3158 | 0.2791 | 0.8332 |
| 0.0004 | 143.0 | 3432 | 1.0185 | 0.2571 | 0.3158 | 0.2835 | 0.8426 |
| 0.0006 | 144.0 | 3456 | 1.0168 | 0.2603 | 0.3333 | 0.2923 | 0.8458 |
| 0.0005 | 145.0 | 3480 | 1.0079 | 0.2754 | 0.3333 | 0.3016 | 0.8476 |
| 0.0006 | 146.0 | 3504 | 1.0080 | 0.25 | 0.3333 | 0.2857 | 0.8438 |
| 0.0004 | 147.0 | 3528 | 1.0194 | 0.2346 | 0.3333 | 0.2754 | 0.8396 |
| 0.0004 | 148.0 | 3552 | 1.0299 | 0.2262 | 0.3333 | 0.2695 | 0.8373 |
| 0.0005 | 149.0 | 3576 | 1.0331 | 0.2289 | 0.3333 | 0.2714 | 0.8387 |
| 0.0004 | 150.0 | 3600 | 1.0294 | 0.2436 | 0.3333 | 0.2815 | 0.8412 |
| 0.0004 | 151.0 | 3624 | 1.0366 | 0.2405 | 0.3333 | 0.2794 | 0.8410 |
| 0.0004 | 152.0 | 3648 | 1.0533 | 0.2468 | 0.3333 | 0.2836 | 0.8448 |
| 0.0005 | 153.0 | 3672 | 1.0379 | 0.2879 | 0.3333 | 0.3089 | 0.8458 |
| 0.0005 | 154.0 | 3696 | 1.0395 | 0.2836 | 0.3333 | 0.3065 | 0.8454 |
| 0.0004 | 155.0 | 3720 | 1.0438 | 0.2836 | 0.3333 | 0.3065 | 0.8453 |
| 0.0004 | 156.0 | 3744 | 1.0475 | 0.2879 | 0.3333 | 0.3089 | 0.8453 |
| 0.0004 | 157.0 | 3768 | 1.0558 | 0.2794 | 0.3333 | 0.304 | 0.8450 |
| 0.0004 | 158.0 | 3792 | 1.0596 | 0.2754 | 0.3333 | 0.3016 | 0.8444 |
| 0.0004 | 159.0 | 3816 | 1.0633 | 0.2836 | 0.3333 | 0.3065 | 0.8445 |
| 0.0004 | 160.0 | 3840 | 1.0653 | 0.2836 | 0.3333 | 0.3065 | 0.8445 |
| 0.0004 | 161.0 | 3864 | 1.0687 | 0.2754 | 0.3333 | 0.3016 | 0.8446 |
| 0.0004 | 162.0 | 3888 | 1.0732 | 0.2714 | 0.3333 | 0.2992 | 0.8448 |
| 0.0005 | 163.0 | 3912 | 1.0729 | 0.2568 | 0.3333 | 0.2901 | 0.8444 |
| 0.0004 | 164.0 | 3936 | 1.0764 | 0.2533 | 0.3333 | 0.2879 | 0.8436 |
| 0.0005 | 165.0 | 3960 | 1.0737 | 0.2794 | 0.3333 | 0.304 | 0.8465 |
| 0.0005 | 166.0 | 3984 | 1.0700 | 0.2754 | 0.3333 | 0.3016 | 0.8482 |
| 0.0004 | 167.0 | 4008 | 1.0679 | 0.2794 | 0.3333 | 0.304 | 0.8496 |
| 0.0005 | 168.0 | 4032 | 1.0695 | 0.2676 | 0.3333 | 0.2969 | 0.8498 |
| 0.0004 | 169.0 | 4056 | 1.0704 | 0.2714 | 0.3333 | 0.2992 | 0.8498 |
| 0.0005 | 170.0 | 4080 | 1.0716 | 0.2794 | 0.3333 | 0.304 | 0.8495 |
| 0.0004 | 171.0 | 4104 | 1.0702 | 0.2639 | 0.3333 | 0.2946 | 0.8498 |
| 0.0005 | 172.0 | 4128 | 1.0713 | 0.25 | 0.3333 | 0.2857 | 0.8491 |
| 0.0004 | 173.0 | 4152 | 1.0736 | 0.2436 | 0.3333 | 0.2815 | 0.8491 |
| 0.0005 | 174.0 | 4176 | 1.0808 | 0.2568 | 0.3333 | 0.2901 | 0.8486 |
| 0.0004 | 175.0 | 4200 | 1.0867 | 0.2639 | 0.3333 | 0.2946 | 0.8486 |
| 0.0004 | 176.0 | 4224 | 1.0899 | 0.2754 | 0.3333 | 0.3016 | 0.8486 |
| 0.0004 | 177.0 | 4248 | 1.0900 | 0.2603 | 0.3333 | 0.2923 | 0.8486 |
| 0.0005 | 178.0 | 4272 | 1.0871 | 0.2754 | 0.3333 | 0.3016 | 0.8489 |
| 0.0004 | 179.0 | 4296 | 1.0863 | 0.2794 | 0.3333 | 0.304 | 0.8492 |
| 0.0004 | 180.0 | 4320 | 1.0892 | 0.2754 | 0.3333 | 0.3016 | 0.8493 |
| 0.0004 | 181.0 | 4344 | 1.0919 | 0.2639 | 0.3333 | 0.2946 | 0.8489 |
| 0.0004 | 182.0 | 4368 | 1.0933 | 0.2639 | 0.3333 | 0.2946 | 0.8490 |
| 0.0004 | 183.0 | 4392 | 1.0949 | 0.2639 | 0.3333 | 0.2946 | 0.8489 |
| 0.0004 | 184.0 | 4416 | 1.0953 | 0.2639 | 0.3333 | 0.2946 | 0.8489 |
| 0.0004 | 185.0 | 4440 | 1.1031 | 0.2714 | 0.3333 | 0.2992 | 0.8496 |
| 0.0004 | 186.0 | 4464 | 1.1049 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0004 | 187.0 | 4488 | 1.1082 | 0.2676 | 0.3333 | 0.2969 | 0.8495 |
| 0.0004 | 188.0 | 4512 | 1.1091 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0004 | 189.0 | 4536 | 1.1109 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0004 | 190.0 | 4560 | 1.1119 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0004 | 191.0 | 4584 | 1.1129 | 0.2603 | 0.3333 | 0.2923 | 0.8494 |
| 0.0004 | 192.0 | 4608 | 1.1139 | 0.2639 | 0.3333 | 0.2946 | 0.8494 |
| 0.0005 | 193.0 | 4632 | 1.1051 | 0.2676 | 0.3333 | 0.2969 | 0.8497 |
| 0.0004 | 194.0 | 4656 | 1.1037 | 0.2639 | 0.3333 | 0.2946 | 0.8495 |
| 0.0004 | 195.0 | 4680 | 1.1045 | 0.2568 | 0.3333 | 0.2901 | 0.8496 |
| 0.0004 | 196.0 | 4704 | 1.1052 | 0.2568 | 0.3333 | 0.2901 | 0.8496 |
| 0.0004 | 197.0 | 4728 | 1.1057 | 0.2568 | 0.3333 | 0.2901 | 0.8496 |
| 0.0004 | 198.0 | 4752 | 1.1057 | 0.2533 | 0.3333 | 0.2879 | 0.8497 |
| 0.0004 | 199.0 | 4776 | 1.1061 | 0.2533 | 0.3333 | 0.2879 | 0.8497 |
| 0.0004 | 200.0 | 4800 | 1.1061 | 0.2533 | 0.3333 | 0.2879 | 0.8498 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
ahishamm/vit-huge-HAM-10000-patch-14
|
ahishamm
| 2023-06-27T17:14:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T16:00:12Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-HAM-10000-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-HAM-10000-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/HAM_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3807
- Accuracy: 0.8653
- Recall: 0.8653
- F1: 0.8653
- Precision: 0.8653
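For inference, the standard `image-classification` pipeline should work; a minimal sketch (`lesion.jpg` is a placeholder path):
```python
from transformers import pipeline

# a minimal sketch: classify a skin-lesion image with this checkpoint
classifier = pipeline(
    "image-classification",
    model="ahishamm/vit-huge-HAM-10000-patch-14",
)

print(classifier("lesion.jpg"))  # placeholder path; returns label/score pairs
```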
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7133 | 0.2 | 100 | 0.7307 | 0.7551 | 0.7551 | 0.7551 | 0.7551 |
| 0.7015 | 0.4 | 200 | 0.6770 | 0.7546 | 0.7546 | 0.7546 | 0.7546 |
| 0.5847 | 0.6 | 300 | 0.6005 | 0.7890 | 0.7890 | 0.7890 | 0.7890 |
| 0.6016 | 0.8 | 400 | 0.5909 | 0.7810 | 0.7810 | 0.7810 | 0.7810 |
| 0.585 | 1.0 | 500 | 0.4994 | 0.8175 | 0.8175 | 0.8175 | 0.8175 |
| 0.3114 | 1.2 | 600 | 0.4799 | 0.8354 | 0.8354 | 0.8354 | 0.8354 |
| 0.2868 | 1.4 | 700 | 0.5035 | 0.8140 | 0.8140 | 0.8140 | 0.8140 |
| 0.3178 | 1.6 | 800 | 0.4345 | 0.8544 | 0.8544 | 0.8544 | 0.8544 |
| 0.344 | 1.8 | 900 | 0.4539 | 0.8374 | 0.8374 | 0.8374 | 0.8374 |
| 0.3273 | 2.0 | 1000 | 0.3807 | 0.8653 | 0.8653 | 0.8653 | 0.8653 |
| 0.0903 | 2.2 | 1100 | 0.4843 | 0.8574 | 0.8574 | 0.8574 | 0.8574 |
| 0.1105 | 2.4 | 1200 | 0.4116 | 0.8788 | 0.8788 | 0.8788 | 0.8788 |
| 0.1551 | 2.59 | 1300 | 0.4446 | 0.8534 | 0.8534 | 0.8534 | 0.8534 |
| 0.0804 | 2.79 | 1400 | 0.4129 | 0.8778 | 0.8778 | 0.8778 | 0.8778 |
| 0.0811 | 2.99 | 1500 | 0.4459 | 0.8738 | 0.8738 | 0.8738 | 0.8738 |
| 0.0391 | 3.19 | 1600 | 0.4409 | 0.8878 | 0.8878 | 0.8878 | 0.8878 |
| 0.0075 | 3.39 | 1700 | 0.4671 | 0.8888 | 0.8888 | 0.8888 | 0.8888 |
| 0.0113 | 3.59 | 1800 | 0.4591 | 0.8788 | 0.8788 | 0.8788 | 0.8788 |
| 0.0079 | 3.79 | 1900 | 0.4695 | 0.8858 | 0.8858 | 0.8858 | 0.8858 |
| 0.021 | 3.99 | 2000 | 0.4705 | 0.8893 | 0.8893 | 0.8893 | 0.8893 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Bodolaz/Unit-4.2-final
|
Bodolaz
| 2023-06-27T17:09:51Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T17:09:08Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Unit-4.2-final
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 9.71 +/- 10.79
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kojitakahiro/dar
|
kojitakahiro
| 2023-06-27T17:05:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T17:03:16Z |
---
license: creativeml-openrail-m
---
|
JaakeB/ppo-Huggy
|
JaakeB
| 2023-06-27T17:00:25Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-27T17:00:21Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: JaakeB/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pcuenq/falcon-7b-instruct
|
pcuenq
| 2023-06-27T16:58:28Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"coreml",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-19T11:34:20Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: true
license: apache-2.0
duplicated_from: tiiuae/falcon-7b-instruct
---
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Falcon-7B-Instruct develop guardrails and take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
falconllm@tii.ae
|
Miholini/turkishReviews-ds-mini
|
Miholini
| 2023-06-27T16:46:56Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T16:45:00Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Miholini/turkishReviews-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Miholini/turkishReviews-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.4662
- Validation Loss: 8.2837
- Epoch: 2
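As a rough usage sketch (the Turkish prompt is an arbitrary example):
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Miholini/turkishReviews-ds-mini")
model = TFAutoModelForCausalLM.from_pretrained("Miholini/turkishReviews-ds-mini")

inputs = tokenizer("Bu ürün", return_tensors="tf")  # "Bu ürün" = "This product"
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```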
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -887, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2944 | 9.6963 | 0 |
| 9.3022 | 8.9384 | 1 |
| 8.4662 | 8.2837 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
numanBot/summary_annotation_score
|
numanBot
| 2023-06-27T16:45:33Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T16:32:58Z |
```python
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "numanBot/summary_annotation_score", num_labels=1
)
```
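A scoring sketch (the sentence is an arbitrary example; the single logit follows from `num_labels=1`):
```python
inputs = tokenizer("The summary captures the key points of the article.", return_tensors="tf")
outputs = model(**inputs)
print(float(outputs.logits[0][0]))  # predicted annotation score
```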
|
jdawnduan/q-Taxi-v3-test
|
jdawnduan
| 2023-06-27T16:44:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T16:44:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebooks.
import gym  # the course notebooks may instead use `import gymnasium as gym`

model = load_from_hub(repo_id="jdawnduan/q-Taxi-v3-test", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
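A minimal greedy-rollout sketch, assuming the pickled dict carries a `"qtable"` entry and the Gymnasium-style step API, both as in the course notebooks:
```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)
```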
|
maidh/ppo-LunarLander-v2-unit8-v1
|
maidh
| 2023-06-27T16:41:53Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T16:40:37Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 21.08 +/- 78.81
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 2000000
'learning_rate': 0.0001
'num_envs': 4
'num_steps': 512
'anneal_lr': True
'gae': True
'gamma': 0.999
'gae_lambda': 0.98
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'WilliamADSP/ppo-LunarLander-v2-unit8-v1'
'batch_size': 2048
'minibatch_size': 512}
```
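For reference, the derived batch sizes follow directly from the rollout settings above:
```python
num_envs, num_steps, num_minibatches = 4, 512, 4
batch_size = num_envs * num_steps               # 4 * 512 = 2048
minibatch_size = batch_size // num_minibatches  # 2048 // 4 = 512
```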
|
jdawnduan/q-FrozenLake-v1-4x4-noSlippery
|
jdawnduan
| 2023-06-27T16:40:29Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T16:40:26Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebooks.
import gym  # the course notebooks may instead use `import gymnasium as gym`

model = load_from_hub(repo_id="jdawnduan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
maidh/YOUR_REPO_ID
|
maidh
| 2023-06-27T16:34:59Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T15:44:43Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.73 +/- 5.48
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r WilliamADSP/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it previously concluded.
|
slone/fastText-LID-323
|
slone
| 2023-06-27T16:28:03Z | 4 | 9 |
fasttext
|
[
"fasttext",
"text-classification",
"language-identification",
"arxiv:2209.09368",
"region:us"
] |
text-classification
| 2022-09-15T06:44:18Z |
---
library_name: fasttext
tags:
- text-classification
- language-identification
---
This is a fastText-based language classification model from the paper [The first neural machine translation system for the Erzya language](https://arxiv.org/abs/2209.09368).
It supports 323 languages used in Wikipedia (as of July 2022), and has extended support for the Erzya (`myv`) and Moksha (`mdf`) languages.
Example usage:
```Python
import fasttext
import urllib.request
import os
model_path = 'lid.323.ftz'
url = 'https://huggingface.co/slone/fastText-LID-323/resolve/main/lid.323.ftz'
if not os.path.exists(model_path):
urllib.request.urlretrieve(url, model_path) # or just download it manually
model = fasttext.load_model(model_path)
languages, scores = model.predict("эрзянь кель", k=3) # k is the number of returned hypotheses
```
The model was trained on texts of articles randomly sampled from Wikipedia. It works better with sentences and longer texts than with words, and may be sensitive to noise.
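To turn predictions into bare language codes, strip fastText's default `__label__` prefix (a sketch; adjust if this model uses a different prefix):
```python
languages, scores = model.predict("эрзянь кель", k=3)
for label, score in zip(languages, scores):
    print(label.replace("__label__", ""), round(float(score), 3))
```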
|
kesslya1/F1_cars
|
kesslya1
| 2023-06-27T16:18:10Z | 7 | 0 |
keras
|
[
"keras",
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-16T07:08:00Z |
---
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
---
|
chunwoolee0/xlm-roberta-base-finetuned-panx-de
|
chunwoolee0
| 2023-06-27T16:09:19Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-27T15:29:35Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8653353814644136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8653
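A minimal inference sketch for German NER (the example sentence is arbitrary):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="chunwoolee0/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Angela Merkel wohnt in Berlin."))
```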
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2583 | 1.0 | 525 | 0.1596 | 0.8231 |
| 0.1262 | 2.0 | 1050 | 0.1395 | 0.8468 |
| 0.0824 | 3.0 | 1575 | 0.1339 | 0.8653 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
gongliyu/fine-tuned-t5-small
|
gongliyu
| 2023-06-27T15:44:16Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-23T19:00:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: fine-tuned-t5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-t5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5422
- Precision: nan
- Recall: 0.7117
- F1: 0.5635
- Hashcode: roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2)
- Gen Len: 19.0
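Since the fine-tuning dataset is unspecified, only a generic text2text sketch is possible (the `summarize:` task prefix is an assumption):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="gongliyu/fine-tuned-t5-small")
# The task prefix is an assumption; this card does not state the training task.
print(generator("summarize: The committee met on Tuesday to review the budget.")[0]["generated_text"])
```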
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Hashcode | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:------------------------------------------------------:|:-------:|
| No log | 1.0 | 1 | 12.9679 | 0.7745 | 0.7227 | 0.7474 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 2.0 | 2 | 12.1426 | 0.7811 | 0.7221 | 0.7503 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 3.0 | 3 | 11.2809 | 0.7811 | 0.7221 | 0.7503 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 4.0 | 4 | 10.4669 | 0.7821 | 0.7273 | 0.7536 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 5.0 | 5 | 9.7061 | 0.7821 | 0.7273 | 0.7536 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 6.0 | 6 | 9.0054 | 0.7821 | 0.7273 | 0.7536 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 7.0 | 7 | 8.3875 | 0.7821 | 0.7273 | 0.7536 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 8.0 | 8 | 7.8287 | 0.7772 | 0.7278 | 0.7515 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 9.0 | 9 | 7.3385 | 0.7772 | 0.7278 | 0.7515 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 10.0 | 10 | 6.9141 | 0.7772 | 0.7278 | 0.7515 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 11.0 | 11 | 6.5516 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 12.0 | 12 | 6.2399 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 13.0 | 13 | 5.9851 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 14.0 | 14 | 5.7744 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 15.0 | 15 | 5.5976 | 0.7801 | 0.7240 | 0.7509 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 16.0 | 16 | 5.4546 | 0.7873 | 0.7158 | 0.7497 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 17.0 | 17 | 5.3403 | 0.7873 | 0.7158 | 0.7497 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 18.0 | 18 | 5.2461 | 0.7873 | 0.7158 | 0.7497 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 19.0 | 19 | 5.1688 | 0.7873 | 0.7158 | 0.7497 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 20.0 | 20 | 5.1052 | 0.7922 | 0.7169 | 0.7525 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 21.0 | 21 | 5.0489 | 0.7922 | 0.7169 | 0.7525 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 22.0 | 22 | 5.0025 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 23.0 | 23 | 4.9621 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 24.0 | 24 | 4.9263 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 25.0 | 25 | 4.8933 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 26.0 | 26 | 4.8623 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 27.0 | 27 | 4.8327 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 28.0 | 28 | 4.8060 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 29.0 | 29 | 4.7811 | 0.7941 | 0.7122 | 0.7508 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 30.0 | 30 | 4.7583 | 0.7712 | 0.7105 | 0.7392 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 31.0 | 31 | 4.7361 | 0.7712 | 0.7105 | 0.7392 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 32.0 | 32 | 4.7152 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 33.0 | 33 | 4.6964 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 34.0 | 34 | 4.6789 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 35.0 | 35 | 4.6627 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 36.0 | 36 | 4.6475 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 37.0 | 37 | 4.6330 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 38.0 | 38 | 4.6192 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 39.0 | 39 | 4.6066 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 40.0 | 40 | 4.5957 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 41.0 | 41 | 4.5859 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 42.0 | 42 | 4.5771 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 43.0 | 43 | 4.5693 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 44.0 | 44 | 4.5625 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 45.0 | 45 | 4.5567 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 46.0 | 46 | 4.5518 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 47.0 | 47 | 4.5480 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 48.0 | 48 | 4.5451 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 49.0 | 49 | 4.5432 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
| No log | 50.0 | 50 | 4.5422 | nan | 0.7117 | 0.5635 | roberta-large_L17_idf_version=0.3.12(hug_trans=4.30.2) | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
getrajeev03/distilbart-cnn-12-6-samsum
|
getrajeev03
| 2023-06-27T15:39:58Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T14:23:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-samsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: test
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 39.3733
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-samsum
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4753
- Rouge1: 39.3733
- Rouge2: 19.4821
- Rougel: 29.8944
- Rougelsum: 36.7688
- Gen Len: 59.4750
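A minimal dialogue-summarization sketch (the dialogue is an arbitrary example):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="getrajeev03/distilbart-cnn-12-6-samsum")
dialogue = (
    "Amanda: Are we still on for lunch tomorrow?\n"
    "Jerry: Yes, 12:30 at the usual place.\n"
    "Amanda: Perfect, see you there!"
)
print(summarizer(dialogue)[0]["summary_text"])
```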
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4685 | 1.0 | 14732 | 1.4753 | 39.3733 | 19.4821 | 29.8944 | 36.7688 | 59.4750 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.12.1
- Datasets 2.13.1
- Tokenizers 0.11.0
|
AmedeoBiolatti/dqn-SpaceInvadersNoFrameskip-v4
|
AmedeoBiolatti
| 2023-06-27T15:39:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T15:39:05Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 456.50 +/- 199.66
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AmedeoBiolatti -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AmedeoBiolatti -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AmedeoBiolatti
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
breadlicker45/dough-base-001
|
breadlicker45
| 2023-06-27T15:36:43Z | 1,626 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:breadlicker45/bread-qa",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T15:09:15Z |
---
datasets:
- breadlicker45/bread-qa
---
|
mnicamartins8/bert-base-uncased-with-misspelling-expansion-correction
|
mnicamartins8
| 2023-06-27T15:34:24Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T15:27:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-with-misspelling-expansion-correction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-with-misspelling-expansion-correction
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2229
- Accuracy: 0.9083
- Precision: 0.9132
- Recall: 0.9083
- F1: 0.9100
- Balanced Acc: 0.8893
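A minimal classification sketch (the input is an arbitrary example; label names depend on the unspecified training data):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="mnicamartins8/bert-base-uncased-with-misspelling-expansion-correction",
)
print(clf("i recieved the package late but the quality is good"))
```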
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
maidh/ppo-LunarLander-v2
|
maidh
| 2023-06-27T15:32:32Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-04-20T10:05:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.58 +/- 12.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the filename is an assumption, not confirmed by this card.
checkpoint = load_from_hub("maidh/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Allenpai/llm
|
Allenpai
| 2023-06-27T15:21:51Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-27T13:06:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
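Equivalently, this corresponds to a `transformers` `BitsAndBytesConfig` along these lines (a sketch reconstructed from the values above):
```python
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```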
### Framework versions
- PEFT 0.4.0.dev0
|
uf-aice-lab/git_20
|
uf-aice-lab
| 2023-06-27T14:58:46Z | 98 | 1 |
transformers
|
[
"transformers",
"pytorch",
"git",
"image-text-to-text",
"image-to-text",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-06-27T14:50:51Z |
---
license: mit
language:
- en
pipeline_tag: image-to-text
---
# git_20
<!-- Provide a quick summary of what the model is/does. -->
This model is fine-tuned from Microsoft GIT on 1 Nvidia A100-80G GPU. We extracted 100,000 student assignments containing teacher feedback from 3 million student assignments as training data. The training data is divided into an image part (student assignments) and a text part (teacher feedback). git_20 consists of 18 layers and over 170 million parameters, consuming up to 0.7 gigabytes of disk space. The project aims to use multi-modal, multi-task deep learning models to create a machine learning pipeline that provides automatic diagnostic feedback on students' mathematical reasoning. Researchers can experiment with and fine-tune the model to help build multimodal systems that effectively provide such feedback.
### Here is how to use it with Hugging Face Transformers
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model = AutoModelForCausalLM.from_pretrained("Fan21/git_20")
processor = AutoProcessor.from_pretrained("Fan21/git_20")

image_path = 'Please enter the image address here'
image = Image.open(image_path)
# Optional (notebooks only): preview the image with IPython's display(image).

# Preprocess the image and generate the feedback text.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    outputs = model.generate(pixel_values=pixel_values, max_length=50)
answer = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(answer)
```
|
mnicamartins8/bert-base-uncased-with-expansion-misspellings-correction
|
mnicamartins8
| 2023-06-27T14:58:43Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T14:50:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-with-expansion-misspellings-correction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-with-expansion-misspellings-correction
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2494
- Accuracy: 0.9063
- Precision: 0.9113
- Recall: 0.9063
- F1: 0.9081
- Balanced Acc: 0.8867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
jialii/falcon-7b-instruct
|
jialii
| 2023-06-27T14:54:26Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"coreml",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T14:54:26Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: true
widget:
- text: Hey Falcon! Any recommendations for my holidays in Abu Dhabi?
example_title: Abu Dhabi Trip
- text: What's the Everett interpretation of quantum mechanics?
example_title: 'Q/A: Quantum & Answers'
- text: >-
Give me a list of the top 10 dive sites you would recommend around the
world.
example_title: Diving Top 10
- text: Can you tell me more about deep-water soloing?
example_title: Extreme sports
- text: >-
Can you write a short tweet about the Apache 2.0 release of our latest AI
model, Falcon LLM?
example_title: Twitter Helper
- text: What are the responsibilities of a Chief Llama Officer?
example_title: Trendy Jobs
license: apache-2.0
duplicated_from: tiiuae/falcon-7b-instruct
---
# ✨ Falcon-7B-Instruct
**Falcon-7B-Instruct is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) and finetuned on a mixture of chat/instruct datasets. It is made available under the Apache 2.0 license.**
*Paper coming soon 😊.*
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blog post from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B-Instruct?
* **You are looking for a ready-to-use chat/instruct model based on [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).**
* **Falcon-7B is a strong base model, outperforming comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
💬 **This is an instruct model, which may not be ideal for further finetuning.** If you are interested in building your own instruct/chat model, we recommend starting from [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
🔥 **Looking for an even more powerful model?** [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct) is Falcon-7B-Instruct's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blog post](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B-Instruct.
# Model Card for Falcon-7B-Instruct
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0;
- **Finetuned from model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Falcon-7B-Instruct has been finetuned on a mixture of instruct and chat datasets.
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B-Instruct is mostly trained on English data, and will not generalize appropriately to other languages. Furthermore, as it is trained on large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of Falcon-7B-Instruct develop guardrails and take appropriate precautions for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B-Instruct was finetuned on a 250M tokens mixture of instruct/chat datasets.
| **Data source** | **Fraction** | **Tokens** | **Description** |
|--------------------|--------------|------------|-----------------------------------|
| [Bai ze](https://github.com/project-baize/baize-chatbot) | 65% | 164M | chat |
| [GPT4All](https://github.com/nomic-ai/gpt4all) | 25% | 62M | instruct |
| [GPTeacher](https://github.com/teknium1/GPTeacher) | 5% | 11M | instruct |
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 5% | 13M | massive web crawl |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
## Evaluation
*Paper coming soon.*
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
Note that this model variant is not optimized for NLP benchmarks.
## Technical Specifications
For more information about pretraining, see [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b).
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B-Instruct was trained on AWS SageMaker, on 32 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B-Instruct was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.).
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B-Instruct is made available under the Apache 2.0 license.
## Contact
falconllm@tii.ae
|
nolanaatama/vllgrfrmmncrftrvcv2500pchnlgspdrwb
|
nolanaatama
| 2023-06-27T14:52:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T14:38:47Z |
---
license: creativeml-openrail-m
---
|
Ellbendls/ppo-Pyramid
|
Ellbendls
| 2023-06-27T14:42:01Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-27T14:40:32Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Ellbendls/ppo-Pyramid
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Mel-Iza0/RedPajama-ZeroShot-Type-B
|
Mel-Iza0
| 2023-06-27T14:25:20Z | 0 | 0 | null |
[
"pt",
"region:us"
] | null | 2023-06-23T17:11:16Z |
---
language:
- pt
---
Model fine-tuned on 100K examples in Brazilian Portuguese, in the following format: <br>
```
Classifique o texto entre 'class1', 'class2', 'class3', 'class4', 'class5':
Texto: text
Classe: text class
```
Example:<br>
```
Classifique o texto entre 'geodinâmica', 'empatia', 'certificação isso', 'etimologia', 'viagem':
\n\nTexto: viajar é uma forma de celebrarmos a diversidade
\n\nClasse: viagem
```
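A small helper for building this prompt (a sketch; the function name is hypothetical):
```python
def build_prompt(text: str, classes: list[str]) -> str:
    # Reproduces the classification prompt format shown above.
    class_list = ", ".join(f"'{c}'" for c in classes)
    return f"Classifique o texto entre {class_list}:\n\nTexto: {text}\n\nClasse:"

print(build_prompt(
    "viajar é uma forma de celebrarmos a diversidade",
    ["geodinâmica", "empatia", "certificação isso", "etimologia", "viagem"],
))
```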
---
|
aksj/falcon-finetuned-medQA-lora
|
aksj
| 2023-06-27T14:25:04Z | 4 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-27T14:16:06Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
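A loading sketch under stated assumptions (the base checkpoint is not named on this card; `tiiuae/falcon-7b` below is an assumption):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",  # assumption: the base model is not stated on this card
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "aksj/falcon-finetuned-medQA-lora")
```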
### Framework versions
- PEFT 0.4.0.dev0
|
librarian-bots/BERTopic_model_card_bias
|
librarian-bots
| 2023-06-27T14:21:46Z | 24 | 3 |
bertopic
|
[
"bertopic",
"metadata",
"model cards",
"bias",
"text-classification",
"en",
"dataset:davanstrien/model_cards_with_readmes",
"license:mit",
"region:us"
] |
text-classification
| 2023-05-11T10:31:44Z |
---
tags:
- bertopic
- metadata
- model cards
- bias
library_name: bertopic
datasets:
- davanstrien/model_cards_with_readmes
language:
- en
license: mit
pipeline_tag: text-classification
inference: false
---
# BERTopic model card bias topic model
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("librarian-bots/BERTopic_model_card_bias")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 11
* Number of training documents: 1271
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | evaluation - claim - reasoning - parameters - university | 13 | -1_evaluation_claim_reasoning_parameters |
| 0 | checkpoint - fairly - characterized - even - sectionhttpshuggingfacecobertbaseuncased | 13 | 0_checkpoint_fairly_characterized_even |
| 1 | generative - research - uses - processes - artistic | 137 | 1_generative_research_uses_processes |
| 2 | checkpoint - try - snippet - sectionhttpshuggingfacecobertbaseuncased - limitation | 48 | 2_checkpoint_try_snippet_sectionhttpshuggingfacecobertbaseuncased |
| 3 | meant - technical - sociotechnical - convey - needed | 32 | 3_meant_technical_sociotechnical_convey |
| 4 | gpt2 - team - their - cardhttpsgithubcomopenaigpt2blobmastermodelcardmd - worked | 32 | 4_gpt2_team_their_cardhttpsgithubcomopenaigpt2blobmastermodelcardmd |
| 5 | datasets - internet - unfiltered - therefore - lot | 27 | 5_datasets_internet_unfiltered_therefore |
| 6 | dacy - danish - pipelines - transformer - bert | 25 | 6_dacy_danish_pipelines_transformer |
| 7 | your - pythia - branch - checkpoints - provide | 20 | 7_your_pythia_branch_checkpoints |
| 8 | opt - trained - large - software - code | 15 | 8_opt_trained_large_software |
| 9 | al - et - identity - occupational - groups | 15 | 9_al_et_identity_occupational |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.29.0
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.11
|
MainaMan/ppo-SnowballTarget
|
MainaMan
| 2023-06-27T14:20:46Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-27T14:11:11Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MainaMan/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
MariaK/whisper-tiny-minds-v5-numproc1
|
MariaK
| 2023-06-27T14:17:11Z | 93 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-27T13:53:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-minds-v5-numproc1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train[451:]
args: en-US
metrics:
- name: Wer
type: wer
value: 0.37507453786523554
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds-v5-numproc1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6530
- Wer Ortho: 0.4102
- Wer: 0.3751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.4354 | 3.57 | 100 | 0.5542 | 0.4539 | 0.3870 |
| 0.066 | 7.14 | 200 | 0.5501 | 0.4059 | 0.3554 |
| 0.0086 | 10.71 | 300 | 0.6204 | 0.3953 | 0.3542 |
| 0.0028 | 14.29 | 400 | 0.6455 | 0.3990 | 0.3631 |
| 0.0022 | 17.86 | 500 | 0.6530 | 0.4102 | 0.3751 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
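A minimal transcription sketch, assuming standard `transformers` pipeline usage (the audio path is a placeholder):
```python
from transformers import pipeline

# Hypothetical usage sketch for this fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="MariaK/whisper-tiny-minds-v5-numproc1")
print(asr("audio_sample.wav")["text"])  # "audio_sample.wav" is a placeholder
```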
|
psxjp5/bart-large-xsum
|
psxjp5
| 2023-06-27T13:46:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T13:06:27Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-xsum-finetuned-natural-questions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-xsum-finetuned-natural-questions
This model is a fine-tuned version of [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2729
- Rouge1: 19.7211
- Rouge2: 17.4272
- Rougel: 19.0681
- Rougelsum: 19.3677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 0.99 | 34 | 0.2562 | 17.9806 | 15.2059 | 16.807 | 17.5533 |
| No log | 1.99 | 68 | 0.1845 | 14.6261 | 10.494 | 13.0132 | 13.8392 |
| No log | 2.98 | 102 | 0.2171 | 17.3737 | 14.7893 | 16.5485 | 16.8383 |
| No log | 4.0 | 137 | 0.3474 | 17.6187 | 14.727 | 16.5614 | 17.1476 |
| No log | 4.99 | 171 | 0.3462 | 17.7103 | 15.1403 | 16.9424 | 17.3123 |
| 0.1255 | 5.99 | 205 | 0.3355 | 19.2782 | 16.5525 | 18.4283 | 18.8422 |
| 0.1255 | 6.98 | 239 | 0.2281 | 19.8816 | 17.4387 | 19.238 | 19.552 |
| 0.1255 | 7.94 | 272 | 0.2729 | 19.7211 | 17.4272 | 19.0681 | 19.3677 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
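A minimal usage sketch, assuming standard `transformers` text2text generation (the question is an illustrative placeholder):
```python
from transformers import pipeline

# Hypothetical usage sketch for this BART natural-questions fine-tune.
generator = pipeline("text2text-generation", model="psxjp5/bart-large-xsum")
print(generator("Who wrote the Declaration of Independence?", max_new_tokens=64))
```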
|
mfaiq2307/whisper-large-cahya-peft
|
mfaiq2307
| 2023-06-27T13:25:28Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"automatic-speech-recognition",
"license:other",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-24T12:36:44Z |
---
license: other
library_name: transformers
pipeline_tag: automatic-speech-recognition
---
This is a model for Indonesian speech recognition, fine-tuned with LoRA on Whisper-large-v2.
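A minimal loading sketch, assuming this repo holds a LoRA adapter for openai/whisper-large-v2 (as described above):
```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Hypothetical loading sketch; assumes the repo contains a PEFT/LoRA adapter
# on top of openai/whisper-large-v2.
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "mfaiq2307/whisper-large-cahya-peft")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
```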
|
Jumtra/rinna-v1-tune-ep1
|
Jumtra
| 2023-06-27T13:23:03Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ja",
"lm",
"nlp",
"dataset:kunishou/databricks-dolly-15k-ja",
"dataset:kunishou/hh-rlhf-49k-ja",
"dataset:Jumtra/oasst1_ja",
"dataset:Jumtra/jglue_jnli",
"dataset:Jumtra/jglue_jsquad",
"dataset:Jumtra/jglue_jsquads_with_input",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-27T12:20:42Z |
---
license: mit
tags:
- ja
- gpt_neox
- text-generation
- lm
- nlp
datasets:
- kunishou/databricks-dolly-15k-ja
- kunishou/hh-rlhf-49k-ja
- Jumtra/oasst1_ja
- Jumtra/jglue_jnli
- Jumtra/jglue_jsquad
- Jumtra/jglue_jsquads_with_input
inference: false
language:
- ja
---
# rinna-3.6b
This model was created by fine-tuning [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5) with MosaicML's llm-foundry repository.
## Model Date
June 28, 2023
## Model License
MIT
## Evaluation
The model's accuracy was evaluated on [Jumtra/test_data_100QA](https://huggingface.co/datasets/Jumtra/test_data_100QA).
Perplexity on the validation data used during training is also reported.
| model name | Accuracy | Perplexity |
| ---- | ---- | ---- |
| [Jumtra/rinna-3.6b-tune-ep5](https://huggingface.co/Jumtra/rinna-3.6b-tune-ep5)| 40/100 | 8.105 |
| [Jumtra/rinna-v1-tune-ep1](https://huggingface.co/Jumtra/rinna-v1-tune-ep1) | 42/100 | 7.458 |
| [Jumtra/rinna-v1-tune-ep3](https://huggingface.co/Jumtra/rinna-v1-tune-ep3) | 41/100 | 7.034 |
| [Jumtra/calm-7b-tune-ep4](https://huggingface.co/Jumtra/calm-7b-tune-ep4) | 40/100 | 9.766 |
| [Jumtra/calm-v3-ep1](https://huggingface.co/Jumtra/calm-v3-ep1) | 35/100 | 9.305 |
| [Jumtra/calm-v3-ep3](https://huggingface.co/Jumtra/calm-v3-ep3) | 37/100 | 13.276 |
The following prompt was used:
```python
INSTRUCTION_KEY = "### 入力:"
RESPONSE_KEY = "### 回答:"
INTRO_BLURB = "以下はタスクを説明する指示と文脈のある文章が含まれた入力です。要求を適切に満たす回答を生成しなさい。"
JP_PROMPT_FOR_GENERATION_FORMAT = """{intro}
{instruction_key}
{instruction}
{response_key}
""".format(
intro=INTRO_BLURB,
instruction_key=INSTRUCTION_KEY,
instruction="{instruction}",
response_key=RESPONSE_KEY,
)
```
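Continuing from the template above, a minimal generation sketch (assuming standard `transformers` usage for this GPT-NeoX checkpoint; the question is an illustrative placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical usage sketch for the prompt template defined above.
tokenizer = AutoTokenizer.from_pretrained("Jumtra/rinna-v1-tune-ep1", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "Jumtra/rinna-v1-tune-ep1", torch_dtype=torch.float16, device_map="auto"
)

prompt = JP_PROMPT_FOR_GENERATION_FORMAT.format(instruction="日本で一番高い山は何ですか?")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```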
|
swl-models/CoffeescentAnime-v1.0
|
swl-models
| 2023-06-27T13:14:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T13:13:02Z |
---
license: creativeml-openrail-m
---
|
machinelearnear/falcon-7b-alpaca-lora-ca
|
machinelearnear
| 2023-06-27T13:13:20Z | 3 | 1 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-23T16:53:17Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
Mahmoud22/Reinforce-cartpole
|
Mahmoud22
| 2023-06-27T13:12:09Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T13:11:59Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
swl-models/EarlySpring-v1.0
|
swl-models
| 2023-06-27T13:12:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T13:09:27Z |
---
license: creativeml-openrail-m
---
|
swl-models/LemonadeAnime-v1.0
|
swl-models
| 2023-06-27T13:09:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T13:07:01Z |
---
license: creativeml-openrail-m
---
|
Shubham09/falcon_hcltech_p1
|
Shubham09
| 2023-06-27T13:00:47Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-27T12:52:07Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
avecoder/marian-finetuned-kde4-en-to-ru
|
avecoder
| 2023-06-27T13:00:25Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-06-27T04:40:39Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-ru
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-ru
split: train
args: en-ru
metrics:
- name: Bleu
type: bleu
value: 29.07778420930096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-ru
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3767
- Bleu: 29.0778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
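A minimal usage sketch, assuming standard `transformers` pipeline usage:
```python
from transformers import pipeline

# Hypothetical usage sketch for this en->ru translation model.
translator = pipeline("translation", model="avecoder/marian-finetuned-kde4-en-to-ru")
print(translator("Default to expanded threads")[0]["translation_text"])
```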
|
Holmodi/Reinforce-policy-gradient
|
Holmodi
| 2023-06-27T12:55:23Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T12:55:14Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-policy-gradient
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
MariaK/whisper-tiny-minds-v3
|
MariaK
| 2023-06-27T12:44:57Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-27T12:20:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-minds-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-minds-v3
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6530
- Wer Ortho: 0.4102
- Wer: 0.3751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.4354 | 3.57 | 100 | 0.5542 | 0.4539 | 0.3870 |
| 0.066 | 7.14 | 200 | 0.5501 | 0.4059 | 0.3554 |
| 0.0086 | 10.71 | 300 | 0.6204 | 0.3953 | 0.3542 |
| 0.0028 | 14.29 | 400 | 0.6455 | 0.3990 | 0.3631 |
| 0.0022 | 17.86 | 500 | 0.6530 | 0.4102 | 0.3751 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
arkelik01/danio
|
arkelik01
| 2023-06-27T12:43:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-17T02:25:32Z |
---
license: creativeml-openrail-m
---
|
ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible
|
ZTamas
| 2023-06-27T12:34:15Z | 134 | 1 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"hu",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-09T11:14:23Z |
---
language:
- hu
pipeline_tag: question-answering
---
This model is a fine-tuned version of deepset/xlm-roberta-large-squad2 on the milqa dataset.
Packages to install for the large RoBERTa model:
```py
sentencepiece==0.1.97
protobuf==3.20.0
```
How to use:
```py
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model = "ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible",
tokenizer = "ZTamas/xlm-roberta-large-squad2-qa-milqa-impossible",
device = 0, #GPU selection, -1 on CPU
handle_impossible_answer = True,
max_answer_len = 50 #This can be modified
)
predictions = qa_pipeline({
'context': context,
'question': question
})
print(predictions)
```
|
DORA1222/cra-test0627
|
DORA1222
| 2023-06-27T12:31:22Z | 0 | 0 | null |
[
"ab",
"license:other",
"region:us"
] | null | 2023-06-27T12:30:02Z |
---
license: other
language:
- ab
metrics:
- accuracy
---
|
christinacdl/OLID_OFFENSIVE_BERT_MULTILINGUAL
|
christinacdl
| 2023-06-27T12:30:22Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:christinacdl/OLID_Offensive",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-27T12:05:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: OLID_OFFENSIVE_BERT_MULTILINGUAL
results: []
datasets:
- christinacdl/OLID_Offensive
language:
- en
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OLID_OFFENSIVE_BERT_MULTILINGUAL
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6444
- Macro F1: 0.7636
- Micro F1: 0.7927
- Accuracy: 0.7927
Performance on test set:
- Accuracy: 0.9022540551927534
- F1 score: 0.8855180494749837
- Precision: 0.8690382339788112
- Recall: 0.9122739652138543
- Matthews Correlation Coefficient: 0.7801150070033589
- Precision of each class: [0.97256778 0.76550868]
- Recall of each class: [0.88969945 0.93484848]
- F1 score of each class: [0.92928985 0.84174625]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Micro F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|
| 0.5568 | 1.0 | 744 | 0.4563 | 0.7641 | 0.7973 | 0.7973 |
| 0.4507 | 2.0 | 1488 | 0.4442 | 0.7657 | 0.8041 | 0.8041 |
| 0.3033 | 3.0 | 2232 | 0.5168 | 0.7672 | 0.7927 | 0.7927 |
| 0.2661 | 4.0 | 2976 | 0.6444 | 0.7636 | 0.7927 | 0.7927 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
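A minimal usage sketch, assuming standard `transformers` pipeline usage (label names come from the model's config):
```python
from transformers import pipeline

# Hypothetical usage sketch for this offensive-language classifier.
classifier = pipeline("text-classification", model="christinacdl/OLID_OFFENSIVE_BERT_MULTILINGUAL")
print(classifier("You are a nice person!"))
```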
|
ZTamas/xlm-roberta-large-squad2_impossible_long_answer
|
ZTamas
| 2023-06-27T12:30:19Z | 11 | 3 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"hu",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-02-09T12:10:19Z |
---
language:
- hu
pipeline_tag: question-answering
---
This model is a fine-tuned version of deepset/xlm-roberta-large-squad2 on the milqa dataset.
Packages to install for the large RoBERTa model:
```py
sentencepiece==0.1.97
protobuf==3.20.0
```
How to use:
```py
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model = "ZTamas/xlm-roberta-large-squad2_impossible_long_answer",
tokenizer = "ZTamas/xlm-roberta-large-squad2_impossible_long_answer",
device = 0, #GPU selection, -1 on CPU
handle_impossible_answer = True,
max_answer_len = 1000 #This can be modified; a large value lets
                      #the model's answer run as long as it needs
)
predictions = qa_pipeline({
'context': context,
'question': question
})
print(predictions)
```
|
lrei/roberta-large-emolit
|
lrei
| 2023-06-27T12:28:44Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"doi:10.57967/hf/0849",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-25T12:43:42Z |
---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-classification
widget:
- text: "Thank you for your help, brave traveler."
- text: "There is no creature loves me; And if I die no soul will pity me."
- text: "We men are wretched things."
---
## Description
Literature sentences from [Project Gutenberg](https://www.gutenberg.org/), annotated with 38 emotion labels (plus neutral examples). A semi-supervised dataset.
## Article
[Detecting Fine-Grained Emotions in Literature](https://www.mdpi.com/2076-3417/13/13/7502)
Please cite:
```bibtex
@Article{app13137502,
AUTHOR = {Rei, Luis and Mladenić, Dunja},
TITLE = {Detecting Fine-Grained Emotions in Literature},
JOURNAL = {Applied Sciences},
VOLUME = {13},
YEAR = {2023},
NUMBER = {13},
ARTICLE-NUMBER = {7502},
URL = {https://www.mdpi.com/2076-3417/13/13/7502},
ISSN = {2076-3417},
DOI = {10.3390/app13137502}
}
```
## Abstract
Emotion detection in text is a fundamental aspect of affective computing and is closely linked to natural language processing. Its applications span various domains, from interactive chatbots to marketing and customer service. This research specifically focuses on its significance in literature analysis and understanding. To facilitate this, we present a novel approach that involves creating a multi-label fine-grained emotion detection dataset, derived from literary sources. Our methodology employs a simple yet effective semi-supervised technique. We leverage textual entailment classification to perform emotion-specific weak-labeling, selecting examples with the highest and lowest scores from a large corpus. Utilizing these emotion-specific datasets, we train binary pseudo-labeling classifiers for each individual emotion. By applying this process to the selected examples, we construct a multi-label dataset. Using this dataset, we train models and evaluate their performance within a traditional supervised setting. Our model achieves an F1 score of 0.59 on our labeled gold set, showcasing its ability to effectively detect fine-grained emotions. Furthermore, we conduct evaluations of the model's performance in zero- and few-shot transfer scenarios using benchmark datasets. Notably, our results indicate that the knowledge learned from our dataset exhibits transferability across diverse data domains, demonstrating its potential for broader applications beyond emotion detection in literature. Our contribution thus includes a multi-label fine-grained emotion detection dataset built from literature, the semi-supervised approach used to create it, as well as the models trained on it. This work provides a solid foundation for advancing emotion detection techniques and their utilization in various scenarios, especially within the cultural heritage analysis.
## Labels
- admiration: finds something admirable, impressive or worthy of respect
- amusement: finds something funny, entertaining or amusing
- anger: is angry, furious, or strongly displeased; displays ire, rage, or wrath
- annoyance: is annoyed or irritated
- approval: expresses a favorable opinion, approves, endorses or agrees with something or someone
- boredom: feels bored, uninterested, monotony, tedium
- calmness: is calm, serene, free from agitation or disturbance, experiences emotional tranquility
- caring: cares about the well-being of someone else, feels sympathy, compassion, affectionate concern towards someone, displays kindness or generosity
- courage: feels courage or the ability to do something that frightens one, displays fearlessness or bravery
- curiosity: is interested, curious, or has strong desire to learn something
- desire: has a desire or ambition, wants something, wishes for something to happen
- despair: feels despair, helpless, powerless, loss or absence of hope, desperation, despondency
- disappointment: feels sadness or displeasure caused by the non-fulfillment of hopes or expectations, being or let down, expresses regret due to the unfavorable outcome of a decision
- disapproval: expresses an unfavorable opinion, disagrees or disapproves of something or someone
- disgust: feels disgust, revulsion, finds something or someone unpleasant, offensive or hateful
- doubt: has doubt or is uncertain about something, bewildered, confused, or shows lack of understanding
- embarrassment: feels embarrassed, awkward, self-conscious, shame, or humiliation
- envy: is covetous, feels envy or jealousy; begrudges or resents someone for their achievements, possessions, or qualities
- excitement: feels excitement or great enthusiasm and eagerness
- faith: expresses religious faith, has a strong belief in the doctrines of a religion, or trust in god
- fear: is afraid or scared due to a threat, danger, or harm
- frustration: feels frustrated: upset or annoyed because of inability to change or achieve something
- gratitude: is thankful or grateful for something
- greed: is greedy, rapacious, avaricious, or has selfish desire to acquire or possess more than what one needs
- grief: feels grief or intense sorrow, or grieves for someone who has died
- guilt: feels guilt, remorse, or regret to have committed wrong or failed in an obligation
- indifference: is uncaring, unsympathetic, uncharitable, or callous, shows indifference, lack of concern, coldness towards someone
- joy: is happy, feels joy, great pleasure, elation, satisfaction, contentment, or delight
- love: feels love, strong affection, passion, or deep romantic attachment for someone
- nervousness: feels nervous, anxious, worried, uneasy, apprehensive, stressed, troubled or tense
- nostalgia: feels nostalgia, longing or wistful affection for the past, something lost, or for a period in one’s life, feels homesickness, a longing for one’s home, city, or country while being away; longing for a familiar place
- optimism: feels optimism or hope, is hopeful or confident about the future, that something good may happen, or the success of something
- pain: feels physical pain or experiences physical suffering
- pride: is proud, feels pride from one’s own achievements, self-fulfillment, or from the achievements of those with whom one is closely associated, or from qualities or possessions that are widely admired
- relief: feels relaxed, relief from tension or anxiety
- sadness: feels sadness, sorrow, unhappiness, depression, dejection
- surprise: is surprised, astonished or shocked by something unexpected
- trust: trusts or has confidence in someone, or believes that someone is good, honest, or reliable
## Dataset
[EmoLit (Zenodo)](https://zenodo.org/record/7883954)
## Code
[EmoLit Train (Github)](https://github.com/lrei/emolit_train)
## Models
- [LARGE](https://huggingface.co/lrei/roberta-large-emolit)
- [BASE](https://huggingface.co/lrei/roberta-base-emolit)
- [DISTILL](https://huggingface.co/lrei/distilroberta-base-emolit)
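A minimal multi-label usage sketch (an assumption on my part: scores are read off with a sigmoid rather than a softmax, as is usual for multi-label heads):
```python
from transformers import pipeline

# Hypothetical usage sketch: score all 38 emotion labels with a sigmoid.
classifier = pipeline(
    "text-classification",
    model="lrei/roberta-large-emolit",
    top_k=None,                   # return scores for every label
    function_to_apply="sigmoid",  # multi-label, not mutually exclusive
)
print(classifier("Thank you for your help, brave traveler."))
```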
|
brandit/fake-or-real-image-detector
|
brandit
| 2023-06-27T12:23:47Z | 0 | 2 |
diffusers
|
[
"diffusers",
"art",
"image-to-text",
"region:us"
] |
image-to-text
| 2023-06-27T12:10:23Z |
---
metrics:
- accuracy
library_name: diffusers
tags:
- art
pipeline_tag: image-to-text
---
|
maidh/Reinforce-PixelCopter5
|
maidh
| 2023-06-27T12:19:42Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T12:19:40Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter5
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kejolong/bayonetta2.0
|
kejolong
| 2023-06-27T12:10:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-27T12:06:19Z |
---
license: creativeml-openrail-m
---
|
TheYuriLover/airoboros-13b-gpt4-1.4-GPTQ-32g-ao-ts
|
TheYuriLover
| 2023-06-27T12:07:09Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-27T08:22:46Z |
This is the 4-bit GPTQ quantization of this model: https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4
The quantization was made using this repository: https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/triton
I used the triton branch with all the GPTQ options enabled (true_sequential + act_order + groupsize 32).
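For reference, a command of roughly this shape would reproduce that configuration (a sketch assuming the `llama.py` interface of GPTQ-for-LLaMa; model paths are placeholders):
```
python llama.py models/airoboros-13b-gpt4-1.4 c4 --wbits 4 --true-sequential --act-order --groupsize 32 --save_safetensors models/airoboros-13b-gpt4-1.4/4bit-32g.safetensors
```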
|
joncam14/rl_course_vizdoom_health_gathering_supreme
|
joncam14
| 2023-06-27T11:59:16Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-27T11:44:27Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.66 +/- 6.13
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r joncam14/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment (the module path below assumes sample-factory's `sf_examples` VizDoom entry point; the auto-generated card pointed at the notebook kernel instead):
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment (again assuming the `sf_examples` VizDoom entry point):
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
aidn/squadBert3Epochs
|
aidn
| 2023-06-27T11:39:42Z | 63 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-27T10:47:14Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aidn/squadBert3Epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aidn/squadBert3Epochs
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8730
- Validation Loss: 1.1031
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8758, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.5485 | 1.1485 | 0 |
| 0.9929 | 1.1031 | 1 |
| 0.8730 | 1.1031 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
usamakenway/pygmalion-13b-4bit-128g-AutoGPTQ
|
usamakenway
| 2023-06-27T11:35:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-06-27T11:30:56Z |
---
language: en
license: other
commercial: no
inference: false
---
# pygmalion-13b-4bit-128g
## Model description
**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
Quantized from the decoded pygmalion-13b XOR weights:
**https://huggingface.co/PygmalionAI/pygmalion-13b**
Saved in safetensors format.
### Quantization Information
GPTQ CUDA quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa
```
python llama.py --wbits 4 models/pygmalion-13b c4 --true-sequential --groupsize 128 --save_safetensors models/pygmalion-13b/4bit-128g.safetensors
```
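A minimal loading sketch with AutoGPTQ, as the repo name suggests (an assumption; exact arguments may differ):
```python
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

# Hypothetical loading sketch; assumes AutoGPTQ can read the 4bit-128g safetensors.
tokenizer = AutoTokenizer.from_pretrained("usamakenway/pygmalion-13b-4bit-128g-AutoGPTQ")
model = AutoGPTQForCausalLM.from_quantized(
    "usamakenway/pygmalion-13b-4bit-128g-AutoGPTQ",
    use_safetensors=True,
    device="cuda:0",
)
```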
|
Anmol0130/bottle_detection_june
|
Anmol0130
| 2023-06-27T11:25:56Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T11:25:49Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: bottle_detection_june
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.84375
---
# bottle_detection_june
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Dewar's_12_Years

#### Dewar's_white_lable

#### bacardi_black

#### bacardi_carta_blanca

#### bacardi_carta_negra

#### bacardi_carta_oro

#### bombay_sapphire

#### coka_cola

#### martini

|
ahishamm/vit-large-PH2-patch-32
|
ahishamm
| 2023-06-27T11:17:59Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T11:16:18Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-PH2-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-PH2-patch-32
This model is a fine-tuned version of [google/vit-large-patch32-224-in21k](https://huggingface.co/google/vit-large-patch32-224-in21k) on the ahishamm/ph2_vit_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4610
- Accuracy: 0.85
- Recall: 0.85
- F1: 0.85
- Precision: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
QuangHuy54/long-t5-tglobal-base-multinews
|
QuangHuy54
| 2023-06-27T11:14:34Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:multi_news",
"license:bsd-3-clause",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-27T07:01:57Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- multi_news
metrics:
- rouge
model-index:
- name: long-t5-tglobal-base-mediasum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: multi_news
type: multi_news
config: default
split: train[:20000]
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.3246
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-t5-tglobal-base-mediasum
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-base-16384-book-summary](https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary) on the multi_news dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0387
- Rouge1: 0.3246
- Rouge2: 0.0867
- Rougel: 0.1663
- Rougelsum: 0.1662
- Gen Len: 106.985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.4191 | 1.0 | 4500 | 2.0952 | 0.3389 | 0.0882 | 0.1706 | 0.1706 | 118.285 |
| 2.3462 | 2.0 | 9000 | 2.0484 | 0.3339 | 0.0887 | 0.1683 | 0.1683 | 111.936 |
| 2.3458 | 3.0 | 13500 | 2.0387 | 0.3246 | 0.0867 | 0.1663 | 0.1662 | 106.985 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-base-PH2-patch-16
|
ahishamm
| 2023-06-27T11:13:02Z | 200 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T11:11:59Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-PH2-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-PH2-patch-16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the ahishamm/ph2_vit_db dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3796
- Accuracy: 0.85
- Recall: 0.85
- F1: 0.85
- Precision: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
zzzAI19/niji-LoRA_v1.0_for_zzzmix
|
zzzAI19
| 2023-06-27T11:09:02Z | 0 | 2 | null |
[
"region:us"
] | null | 2023-06-27T10:51:46Z |
This LoRA was created by additional training on illustrations generated by niji-journey.
The additional training was done on top of my home-made merge model zzzmix, so please use the two together:
https://huggingface.co/zzzAI19/zzzmix
The trigger word is "jis". A LoRA strength of 0.7 is recommended.
I believe it is best suited for use at steps 4-7; my recommended values are 5.5 for background-oriented images and 7 for character-oriented images.
Sample images can be found on the following pages:
https://ai-drawing.net/en/2023/06/27/introduction-of-niji-lora-v1-0/ (English)
https://ai-drawing.net/2023/06/27/niji-lora-v1-0%e3%81%ae%e7%b4%b9%e4%bb%8b/ (Japanese)
---
license: creativeml-openrail-m
---
|
ahishamm/vit-large-PH2-sharpened-patch-16
|
ahishamm
| 2023-06-27T11:05:10Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T11:02:08Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-large-PH2-sharpened-patch-16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-PH2-sharpened-patch-16
This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the ahishamm/PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3520
- Accuracy: 0.875
- Recall: 0.875
- F1: 0.875
- Precision: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-huge-PH2-sharpened-patch-14
|
ahishamm
| 2023-06-27T11:00:32Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T10:55:09Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-huge-PH2-sharpened-patch-14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-huge-PH2-sharpened-patch-14
This model is a fine-tuned version of [google/vit-huge-patch14-224-in21k](https://huggingface.co/google/vit-huge-patch14-224-in21k) on the ahishamm/PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0528
- Accuracy: 1.0
- Recall: 1.0
- F1: 1.0
- Precision: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ahishamm/vit-base-PH2-sharpened-patch-32
|
ahishamm
| 2023-06-27T10:48:02Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-27T10:46:42Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: vit-base-PH2-sharpened-patch-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-PH2-sharpened-patch-32
This model is a fine-tuned version of [google/vit-base-patch32-224-in21k](https://huggingface.co/google/vit-base-patch32-224-in21k) on the ahishamm/PH2_db_sharpened dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0426
- Accuracy: 1.0
- Recall: 1.0
- F1: 1.0
- Precision: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
microsoft/swin-base-patch4-window7-224-in22k
|
microsoft
| 2023-06-27T10:46:44Z | 10,959 | 15 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"swin",
"image-classification",
"vision",
"dataset:imagenet-21k",
"arxiv:2103.14030",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-21k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# Swin Transformer (base-sized model)
Swin Transformer model pre-trained on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224. It was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).
Disclaimer: The team releasing Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally.

[Source](https://paperswithcode.com/method/swin-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swin) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21,841 ImageNet-21k classes:
```python
from transformers import AutoImageProcessor, SwinForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/swin-base-patch4-window7-224-in22k")
model = SwinForImageClassification.from_pretrained("microsoft/swin-base-patch4-window7-224-in22k")
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 21,841 ImageNet-21k classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swin.html#).
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2103-14030,
author = {Ze Liu and
Yutong Lin and
Yue Cao and
Han Hu and
Yixuan Wei and
Zheng Zhang and
Stephen Lin and
Baining Guo},
title = {Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
journal = {CoRR},
volume = {abs/2103.14030},
year = {2021},
url = {https://arxiv.org/abs/2103.14030},
eprinttype = {arXiv},
eprint = {2103.14030},
timestamp = {Thu, 08 Apr 2021 07:53:26 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2103-14030.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|