modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
kowshikBlue/dummy
|
kowshikBlue
| 2023-05-22T15:27:50Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-22T15:01:54Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 8,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
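For reference, the sketch below shows how these parameters map onto a `fit()` call in the classic `sentence_transformers` training API. This is a hedged reconstruction: the actual training data is not part of this card, so the `InputExample` pair is a placeholder, and `{MODEL_NAME}` is the card's own placeholder for the checkpoint name.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder training pair -- the real dataset is not included in this card.
train_examples = [InputExample(texts=["This is an example sentence", "Each sentence is converted"], label=0.9)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer('{MODEL_NAME}')
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=1,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="WarmupLinear",
)
```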
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Arindam1975/q-Taxi-v3
|
Arindam1975
| 2023-05-22T15:22:45Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T15:11:05Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Arindam1975/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
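Continuing from the snippet above, a minimal rollout sketch is shown below. It assumes, as in the course template, that the unpickled dictionary exposes a `qtable` array alongside `env_id`, and it uses the classic `gym` reset/step return values; adjust the keys and API if your checkpoint or gym version differs.
```python
import gym
import numpy as np

# Assumptions: "qtable" and "env_id" keys exist in the checkpoint dict,
# and the installed gym version returns the classic reset()/step() values.
env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])

state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, done, _ = env.step(action)
    total_reward += reward
print("Episode reward:", total_reward)
```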
|
9wimu9/sinhala-roberta-large
|
9wimu9
| 2023-05-22T14:56:09Z | 134 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-22T14:51:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: sinhala-roberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sinhala-roberta-large
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 15
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 1.0
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Manaro/q-FrozenLake-v1-4x4-Slippery
|
Manaro
| 2023-05-22T14:55:19Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T14:07:44Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.70 +/- 0.46
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Manaro/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AlexC98/BertGoodCommitPreprocessed
|
AlexC98
| 2023-05-22T14:49:18Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T14:45:20Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BertGoodCommitPreprocessed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BertGoodCommitPreprocessed
This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5242
- Accuracy: 0.8424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 24 | 0.5858 | 0.6788 |
| No log | 2.0 | 48 | 0.5640 | 0.7273 |
| No log | 3.0 | 72 | 0.5381 | 0.7394 |
| No log | 4.0 | 96 | 0.5246 | 0.7394 |
| No log | 5.0 | 120 | 0.5214 | 0.7394 |
| No log | 6.0 | 144 | 0.5093 | 0.7394 |
| No log | 7.0 | 168 | 0.4986 | 0.7515 |
| No log | 8.0 | 192 | 0.5131 | 0.7455 |
| No log | 9.0 | 216 | 0.5093 | 0.7697 |
| No log | 10.0 | 240 | 0.5064 | 0.7758 |
| No log | 11.0 | 264 | 0.5069 | 0.7697 |
| No log | 12.0 | 288 | 0.4774 | 0.7818 |
| No log | 13.0 | 312 | 0.5096 | 0.7879 |
| No log | 14.0 | 336 | 0.4933 | 0.7939 |
| No log | 15.0 | 360 | 0.4740 | 0.7939 |
| No log | 16.0 | 384 | 0.4787 | 0.7939 |
| No log | 17.0 | 408 | 0.4675 | 0.8 |
| No log | 18.0 | 432 | 0.4971 | 0.8121 |
| No log | 19.0 | 456 | 0.4935 | 0.8303 |
| No log | 20.0 | 480 | 0.4947 | 0.8121 |
| 0.3574 | 21.0 | 504 | 0.4968 | 0.8242 |
| 0.3574 | 22.0 | 528 | 0.5158 | 0.8303 |
| 0.3574 | 23.0 | 552 | 0.5146 | 0.8061 |
| 0.3574 | 24.0 | 576 | 0.4963 | 0.8303 |
| 0.3574 | 25.0 | 600 | 0.5024 | 0.8182 |
| 0.3574 | 26.0 | 624 | 0.5069 | 0.8242 |
| 0.3574 | 27.0 | 648 | 0.5242 | 0.8424 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
VuDucQuang/nature-and-animals
|
VuDucQuang
| 2023-05-22T14:47:34Z | 36 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-22T14:34:37Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Nature-and-Animals Dreambooth model trained by VuDucQuang with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
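A minimal `diffusers` sketch for trying the model locally is given below. It is assumption-based: the repo's tags indicate a standard `StableDiffusionPipeline`, and the prompt is only illustrative.
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumption: the repo loads as a standard StableDiffusionPipeline (per its tags).
pipe = StableDiffusionPipeline.from_pretrained(
    "VuDucQuang/nature-and-animals", torch_dtype=torch.float16
).to("cuda")

# Illustrative prompt only -- check the sample pictures below for the trained concept.
image = pipe("a photo of nature and animals, highly detailed").images[0]
image.save("sample.png")
```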
Sample pictures of this concept:
|
Bestie2088/munirah_v2
|
Bestie2088
| 2023-05-22T14:11:27Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-22T12:50:32Z |
---
license: bigscience-openrail-m
---
|
pysentimiento/robertuito-base-uncased
|
pysentimiento
| 2023-05-22T14:06:59Z | 981 | 11 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"twitter",
"masked-lm",
"es",
"arxiv:2111.09453",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- es
tags:
- twitter
- masked-lm
---
# robertuito-base-uncased
# RoBERTuito
## A pre-trained language model for social media text in Spanish
[**PAPER**](https://arxiv.org/abs/2111.09453)
[Github Repository](https://github.com/pysentimiento/robertuito)
[Open In Colab](https://colab.research.google.com/drive/1WcubR0kbqT289XupSnN5-STe7HafyKpf#scrollTo=SF-n4IdjnoYk)
*RoBERTuito* is a pre-trained language model for user-generated content in Spanish, trained following RoBERTa guidelines on 500 million tweets. *RoBERTuito* comes in 3 flavors: cased, uncased, and uncased+deaccented.
We tested *RoBERTuito* on a benchmark of tasks involving user-generated text in Spanish. It outperforms other pre-trained language models for this language such as *BETO*, *BERTin* and *RoBERTa-BNE*. The 4 tasks selected for evaluation were: Hate Speech Detection (using SemEval 2019 Task 5, HatEval dataset), Sentiment and Emotion Analysis (using TASS 2020 datasets), and Irony detection (using IrosVa 2019 dataset).
| model | hate speech | sentiment analysis | emotion analysis | irony detection | score |
|:-------------------|:----------------|:---------------------|:-------------------|:-----------------|---------:|
| robertuito-uncased | 0.801 ± 0.010 | 0.707 ± 0.004 | 0.551 ± 0.011 | 0.736 ± 0.008 | 0.6987 |
| robertuito-deacc | 0.798 ± 0.008 | 0.702 ± 0.004 | 0.543 ± 0.015 | 0.740 ± 0.006 | 0.6958 |
| robertuito-cased | 0.790 ± 0.012 | 0.701 ± 0.012 | 0.519 ± 0.032 | 0.719 ± 0.023 | 0.6822 |
| roberta-bne | 0.766 ± 0.015 | 0.669 ± 0.006 | 0.533 ± 0.011 | 0.723 ± 0.017 | 0.6726 |
| bertin | 0.767 ± 0.005 | 0.665 ± 0.003 | 0.518 ± 0.012 | 0.716 ± 0.008 | 0.6666 |
| beto-cased | 0.768 ± 0.012 | 0.665 ± 0.004 | 0.521 ± 0.012 | 0.706 ± 0.007 | 0.6651 |
| beto-uncased | 0.757 ± 0.012 | 0.649 ± 0.005 | 0.521 ± 0.006 | 0.702 ± 0.008 | 0.6571 |
We release the pre-trained models on huggingface model hub:
- [RoBERTuito uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased)
- [RoBERTuito cased](https://huggingface.co/pysentimiento/robertuito-base-cased)
- [RoBERTuito deacc](https://huggingface.co/pysentimiento/robertuito-base-deacc)
## Masked LM
To test the masked LM, take into account that space is encoded inside SentencePiece's tokens. So, if you want to test
```
Este es un día<mask>
```
don't put a space between `día` and `<mask>`
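For example, a quick check with the `fill-mask` pipeline could look like the sketch below. This is hedged: since RoBERTuito is not fully integrated into `transformers`, results may be better if the input is first run through the `pysentimiento` preprocessing described in the Usage section.
```python
from transformers import pipeline

# Hedged sketch: consider preprocessing the text with
# pysentimiento.preprocessing.preprocess_tweet first (see the Usage section).
fill_mask = pipeline("fill-mask", model="pysentimiento/robertuito-base-uncased")
# Note: no space between "día" and the mask token.
print(fill_mask("Este es un día<mask>"))
```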
## Usage
**IMPORTANT -- READ THIS FIRST**
*RoBERTuito* is not yet fully integrated into `huggingface/transformers`. To use it, first install `pysentimiento`
```bash
pip install pysentimiento
```
and preprocess text using `pysentimiento.preprocessing.preprocess_tweet` before feeding it into the tokenizer
```python
from transformers import AutoTokenizer
from pysentimiento.preprocessing import preprocess_tweet
tokenizer = AutoTokenizer.from_pretrained('pysentimiento/robertuito-base-cased')
text = "Esto es un tweet estoy usando #Robertuito @pysentimiento 🤣"
preprocessed_text = preprocess_tweet(text)
tokenizer.tokenize(preprocessed_text)
# ['<s>','▁Esto','▁es','▁un','▁tweet','▁estoy','▁usando','▁','▁hashtag','▁','▁ro','bert','uito','▁@usuario','▁','▁emoji','▁cara','▁revolviéndose','▁de','▁la','▁risa','▁emoji','</s>']
```
We are working on integrating this preprocessing step into a Tokenizer within the `transformers` library.
Check a text classification example in this notebook: [Open In Colab](https://colab.research.google.com/drive/1WcubR0kbqT289XupSnN5-STe7HafyKpf#scrollTo=SF-n4IdjnoYk)
## Citation
If you use *RoBERTuito*, please cite our paper:
```bibtex
@inproceedings{perez-etal-2022-robertuito,
title = "{R}o{BERT}uito: a pre-trained language model for social media text in {S}panish",
author = "P{\'e}rez, Juan Manuel and
Furman, Dami{\'a}n Ariel and
Alonso Alemany, Laura and
Luque, Franco M.",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.785",
pages = "7235--7243",
abstract = "Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.",
}
```
|
kasunw/rl_course_vizdoom_health_gathering_supreme
|
kasunw
| 2023-05-22T14:03:49Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T14:03:43Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.02 +/- 3.51
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r kasunw/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
Leon1010/chinese-vicuna-7b
|
Leon1010
| 2023-05-22T13:59:16Z | 0 | 4 | null |
[
"Chinese",
"Vicuna",
"7B",
"LLaMa",
"text-generation",
"zh",
"license:other",
"region:us"
] |
text-generation
| 2023-05-20T11:08:09Z |
---
license: other
language:
- zh
tags:
- Chinese
- Vicuna
- 7B
- LLaMa
pipeline_tag: text-generation
---
chinese-vicuna-7b is an open-source project based on the Chinese LLaMA model and an instruction-tuned Alpaca large model. Building on the original Vicuna, it extends the Chinese vocabulary and performs further pre-training on Chinese data, which improves its basic Chinese semantic understanding. Compared with chinese-vicuna-13b, this model is smaller but still offers strong semantic understanding. The goal of the project is to promote open research on large models in the Chinese NLP community and to support transparent and open academic research.
|
terzimert/M_gpt_v1.5
|
terzimert
| 2023-05-22T13:55:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-22T11:39:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: M_gpt_v1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# M_gpt_v1.5
This model is a fine-tuned version of [ai-forever/mGPT](https://huggingface.co/ai-forever/mGPT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5589
- Precision: 0.4836
- Recall: 0.2252
- F1: 0.3073
- Accuracy: 0.8959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.643 | 1.0 | 1532 | 0.4457 | 0.4 | 0.1450 | 0.2129 | 0.8911 |
| 0.4563 | 2.0 | 3065 | 0.5391 | 0.4667 | 0.1870 | 0.2670 | 0.8963 |
| 0.3724 | 3.0 | 4596 | 0.5589 | 0.4836 | 0.2252 | 0.3073 | 0.8959 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
sd-concepts-library/kitchenrobot
|
sd-concepts-library
| 2023-05-22T13:37:09Z | 0 | 0 | null |
[
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:mit",
"region:us"
] | null | 2023-05-22T13:37:08Z |
---
license: mit
base_model: runwayml/stable-diffusion-v1-5
---
### KitchenRobot on Stable Diffusion
This is the `<keukenrobot>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
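A hedged `diffusers` sketch for loading the learned embedding on top of the base model named in this card and prompting with the `<keukenrobot>` token (the prompt itself is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model listed in the card, then attach the learned embedding.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/kitchenrobot")

# Illustrative prompt using the concept token.
image = pipe("a photo of a <keukenrobot> on a kitchen counter").images[0]
image.save("keukenrobot.png")
```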
Here is the new concept you will be able to use as an `object`:




|
blackeys/q-Taxi-v3
|
blackeys
| 2023-05-22T13:24:02Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T13:24:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="blackeys/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Al00f/CharTurner
|
Al00f
| 2023-05-22T13:23:46Z | 0 | 0 | null |
[
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] | null | 2023-05-22T13:03:07Z |
---
license: openrail
base_model: runwayml/stable-diffusion-v1-5
---
|
MannSingh/LunarLander-v2
|
MannSingh
| 2023-05-22T13:20:54Z | 0 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T13:20:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.84 +/- 17.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
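A possible completion of the stub above, hedged: the checkpoint filename is an assumption and should be checked against the repo's file list; the calls use the standard `huggingface_sb3` and `stable_baselines3` APIs.
```python
import gymnasium as gym  # stable-baselines3 >= 2.0 uses gymnasium
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption -- check the repository's file list for the actual .zip name.
checkpoint = load_from_hub(repo_id="MannSingh/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```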
|
bbbcs/ppo-LunarLander-v2
|
bbbcs
| 2023-05-22T12:56:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T12:55:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.96 +/- 11.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
serkanBurakOrs/rl_course_vizdoom_health_gathering_supreme
|
serkanBurakOrs
| 2023-05-22T12:54:02Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T12:53:55Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.02 +/- 4.52
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r serkanBurakOrs/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
jamiehudson/633_setfit_model_C_220523
|
jamiehudson
| 2023-05-22T12:48:54Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-22T12:48:40Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# jamiehudson/633_setfit_model_C_220523
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("jamiehudson/633_setfit_model_C_220523")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
blackeys/q-FrozenLake-v1-4x4-noSlippery
|
blackeys
| 2023-05-22T12:46:22Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T12:41:27Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="blackeys/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
itislulu/my_awesome_model
|
itislulu
| 2023-05-22T12:39:08Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T12:18:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6459
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 1.3665 | 0.2 |
| No log | 2.0 | 4 | 0.6459 | 0.8 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
|
RomixERR/Leps-so-vits
|
RomixERR
| 2023-05-22T12:36:59Z | 5 | 2 |
transformers
|
[
"transformers",
"music",
"ru",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T12:23:36Z |
---
license: openrail
language:
- ru
tags:
- music
---
Voice model of the singer Grigory Leps, trained with VoicePaw-So-Vits on vocal samples extracted from his songs.
The following songs were used: Я уеду жить в Лондон, Она не твоя, Орлы или вороны, Рюмка водки, Самый лучший день, Водопадами, Я поднимаю руки, Я тебя не люблю.
The songs were first cleaned of backing vocals and cut into voiced segments, initially by hand and then with the built-in So-Vits mechanism.
|
Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-7
|
Ioanaaaaaaa
| 2023-05-22T12:26:43Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T12:07:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sexism-7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sexism-7
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2813
- Accuracy: 0.8406
- F1: 0.8399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0586 | 1.0 | 938 | 1.1347 | 0.8316 | 0.8316 |
| 0.0219 | 2.0 | 1876 | 1.2813 | 0.8406 | 0.8399 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Juardo/my_awesome_mind_model
|
Juardo
| 2023-05-22T12:13:10Z | 177 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-05-22T10:37:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.0796
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | nan | 0.0796 |
| No log | 1.87 | 7 | nan | 0.0796 |
| 95.4018 | 2.93 | 11 | nan | 0.0796 |
| 95.4018 | 4.0 | 15 | nan | 0.0796 |
| 95.4018 | 4.8 | 18 | nan | 0.0796 |
| 0.0 | 5.87 | 22 | nan | 0.0796 |
| 0.0 | 6.93 | 26 | nan | 0.0796 |
| 0.0 | 8.0 | 30 | nan | 0.0796 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lelleib/xlm-r-base-amazon-massive-intent-finetuned-clinc150
|
lelleib
| 2023-05-22T12:11:46Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-05-22T12:03:01Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlm-r-base-amazon-massive-intent-finetuned-clinc150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-r-base-amazon-massive-intent-finetuned-clinc150
This model is a fine-tuned version of [cartesinus/xlm-r-base-amazon-massive-intent](https://huggingface.co/cartesinus/xlm-r-base-amazon-massive-intent) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.1668 | 1.0 | 235 | 4.9327 |
| 4.7891 | 2.0 | 470 | 4.0434 |
| 4.139 | 3.0 | 705 | 3.8096 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
WALIDALI/viniciuslibya
|
WALIDALI
| 2023-05-22T12:07:41Z | 32 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-22T11:55:37Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### viniciuslibya Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
phnghiapro/distilbert-base-uncased-finetuned-cola
|
phnghiapro
| 2023-05-22T12:07:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T11:26:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5282404248888111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5703
- Matthews Correlation: 0.5282
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5235 | 1.0 | 535 | 0.5332 | 0.4098 |
| 0.3452 | 2.0 | 1070 | 0.4980 | 0.4899 |
| 0.2301 | 3.0 | 1605 | 0.5703 | 0.5282 |
| 0.1786 | 4.0 | 2140 | 0.7849 | 0.5126 |
| 0.134 | 5.0 | 2675 | 0.8406 | 0.5185 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
zonghaoyang/BioLinkBERT-base
|
zonghaoyang
| 2023-05-22T12:05:07Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-20T23:36:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: BioLinkBERT-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioLinkBERT-base
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3937
- Accuracy: 0.9025
- F1: 0.6107
- Precision: 0.6765
- Recall: 0.5565
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2363 | 1.0 | 1626 | 0.2699 | 0.9057 | 0.5991 | 0.7205 | 0.5127 |
| 0.1832 | 2.0 | 3252 | 0.3328 | 0.9038 | 0.6233 | 0.675 | 0.5789 |
| 0.1324 | 3.0 | 4878 | 0.3937 | 0.9025 | 0.6107 | 0.6765 | 0.5565 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
gabluz/ggml-whisper-large-quantized
|
gabluz
| 2023-05-22T12:01:59Z | 0 | 7 | null |
[
"region:us"
] | null | 2023-05-19T07:33:26Z |
# How to run this ggml file?
WARNING: this can be slow and CPU intensive.
Clone the whisper.cpp repository and build it:
```
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make
```
Place this model (large_q5_0.bin) inside the whisper.cpp/models folder and run:
```
./main -m models/large_q5_0.bin yourfilename.wav
```
# Command to transcribe to SRT subtitle files:
```
./main -m models/large_q5_0.bin yourfilename.wav --output-srt --print-progress
```
# Command to transcribe to TRANSLATED (to English) SRT subtitle files:
```
./main -m models/large_q5_0.bin yourfilename.wav --output-srt --print-progress --translate
```
It can transcribe ONLY wav files!
# Command line to convert an mp4 (works for any video, just change the extension) to wav:
```
ffmpeg -i yourfilename.mp4 -vn -acodec pcm_s16le -ar 16000 -ac 2 yourfilename.wav
```
# Command to convert all mp4 files inside a folder (works for any video, just change the extension) to wav:
```
find . -type f -iname "*.mp4" -exec bash -c 'ffmpeg -i "$0" -vn -acodec pcm_s16le -ar 16000 -ac 2 "${0%.*}.wav"' {} \;
```
|
GeneZC/bert-large-qqp
|
GeneZC
| 2023-05-22T11:48:22Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T11:17:01Z |
---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-large-uncased` fine-tuned on `QQP`.
## Parameter settings
Batch size: 32; learning rate: 2e-5.
## Metrics
Accuracy: 0.9178, F1: 0.8895.
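A minimal usage sketch is given below. It is hedged: it assumes the checkpoint ships with the fine-tuned QQP classification head, and the label order should be verified against `config.id2label` in the repo.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the repo contains the fine-tuned QQP sequence-classification head.
tokenizer = AutoTokenizer.from_pretrained("GeneZC/bert-large-qqp")
model = AutoModelForSequenceClassification.from_pretrained("GeneZC/bert-large-qqp")

question1 = "How do I learn Python quickly?"
question2 = "What is the fastest way to learn Python?"
inputs = tokenizer(question1, question2, return_tensors="pt")

with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # check config.id2label for which column means "duplicate"
```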
|
auser/pybrain
|
auser
| 2023-05-22T11:45:12Z | 0 | 0 |
generic
|
[
"generic",
"audio",
"automatic-speech-recognition",
"endpoints-template",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-22T11:43:33Z |
---
license: gpl-3.0
tags:
- audio
- automatic-speech-recognition
- endpoints-template
library_name: generic
inference: false
duplicated_from: tomiwa1a/video-search
---
# Video Search
This project contains three different models that can be used for searching videos:
1. Whisper, to transcribe mp3 audio files to text
2. BART Sentence Transformer, to generate vector embeddings from text
3. BART LFQA, to generate long-form answers given a context
|
Randikariskyrazak/sten
|
Randikariskyrazak
| 2023-05-22T11:39:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-22T11:37:23Z |
---
license: creativeml-openrail-m
---
|
Amirkid/karparthy-test
|
Amirkid
| 2023-05-22T11:35:33Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-22T11:35:33Z |
---
license: creativeml-openrail-m
---
|
scribe-project/wav2vec2-large-voxrex-300m-radio
|
scribe-project
| 2023-05-22T11:33:36Z | 37 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-01-07T15:43:49Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
This is a wav2vec2 model fine-tuned on a Norwegian dataset from the radio broadcasting corpus.
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
The model can be used for automatic speech recognition in Norwegian and for other tasks involving speech technology.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** The SCRIBE project https://scribe-project.github.io/
- **Shared by [optional]:** The SCRIBE project https://scribe-project.github.io/
- **Model type:** wav2vec2
- **Language(s) (NLP):** Norwegian
- **License:** Apache 2.0
- **Finetuned from model [optional]:** KBLab/wav2vec2-large-voxrex
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/scribe-project/nodalida_2023_combined_training
- **Paper [optional]:**
```
@InProceedings{SolbergEtAlNoDaLiDa2023,
author = {Per Erik Solberg and Pablo Ortiz and Phoebe Parsons and Torbjørn Svendsen and Giampiero Salvi},
title = {Improving Generalization of Norwegian ASR with Limited Linguistic Resources},
booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics},
year = {2023},
month = {May},
address = {Tórshavn, Faroe Islands},
}
```
## Uses
The model can be used for automatic speech recognition in Norwegian and for other tasks involving speech technology.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
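A hedged sketch, assuming the checkpoint works with the standard `transformers` automatic-speech-recognition pipeline and 16 kHz mono audio:
```python
from transformers import pipeline

# Assumption: the checkpoint works with the standard ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="scribe-project/wav2vec2-large-voxrex-300m-radio",
)
print(asr("norwegian_audio.wav")["text"])  # path to a 16 kHz mono WAV file
```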
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
scribe-project/wav2vec2-large-voxrex-300m-combined-short
|
scribe-project
| 2023-05-22T11:26:01Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-23T22:47:07Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
The model is fine-tuned from a Swedish model with 300 million parameters trained by the Swedish Royal Library.
- **Developed by:** The SCRIBE project https://scribe-project.github.io/
- **Shared by [optional]:** The SCRIBE project https://scribe-project.github.io/
- **Model type:** wav2vec2
- **Language(s) (NLP):** Norwegian
- **License:** Apache 2.0
- **Finetuned from model [optional]:** KBLab/wav2vec2-large-voxrex
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/scribe-project/nodalida_2023_combined_training
- **Paper [optional]:**
```
@InProceedings{SolbergEtAlNoDaLiDa2023,
author = {Per Erik Solberg and Pablo Ortiz and Phoebe Parsons and Torbjørn Svendsen and Giampiero Salvi},
title = {Improving Generalization of Norwegian ASR with Limited Linguistic Resources},
booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics},
year = {2023},
month = {May},
address = {Tórshavn, Faroe Islands},
}
```
## Uses
The model can be used for automatic speech recognition in Norwegian and for other tasks involving speech technology.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
https://github.com/scribe-project/nodalida_2023_combined_training
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ashish-soni08/ppo-Lunar-Lander-v2
|
ashish-soni08
| 2023-05-22T11:20:27Z | 8 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T11:20:10Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.66 +/- 20.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is an assumption; verify it against the files in the repo.
checkpoint = load_from_hub(repo_id="ashish-soni08/ppo-Lunar-Lander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-4
|
Ioanaaaaaaa
| 2023-05-22T11:20:24Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T10:59:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sexism-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sexism-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7202
- Accuracy: 0.8406
- F1: 0.8399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.14 | 1.0 | 1876 | 0.7202 | 0.8406 | 0.8399 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
GeneZC/bert-large-qnli
|
GeneZC
| 2023-05-22T11:16:38Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T11:05:31Z |
---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-large-uncased` finetuned on `QNLI`.
## Parameter settings
batch size is 32, learning rate is 2e-5.
## Metrics
acc: 0.9224
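A minimal inference sketch is given below; it assumes the published checkpoint includes the two-way QNLI (question/sentence entailment) classification head, and the example pair is illustrative only.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the checkpoint ships a two-way QNLI classification head.
tokenizer = AutoTokenizer.from_pretrained("GeneZC/bert-large-qnli")
model = AutoModelForSequenceClassification.from_pretrained("GeneZC/bert-large-qnli")

question = "What is the capital of France?"
sentence = "Paris is the capital and most populous city of France."
inputs = tokenizer(question, sentence, return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # check model.config.id2label for the label order
```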
|
Smoden/newest_Alice_mix_wizard_mix_Chronicles_diff_lora_v2
|
Smoden
| 2023-05-22T11:08:42Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-22T07:23:53Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Smoden/newest_Alice_mix_wizard_mix_Chronicles_diff_lora_v2
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following.
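A minimal sketch for applying these weights on top of the base checkpoint with `diffusers` might look like the following; the prompt is an illustrative assumption, not taken from the training data.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the LoRA attention weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.to("cuda")
pipe.unet.load_attn_procs("Smoden/newest_Alice_mix_wizard_mix_Chronicles_diff_lora_v2")

# The prompt below is only an example.
image = pipe("a whimsical wonderland forest, storybook illustration", num_inference_steps=30).images[0]
image.save("example.png")
```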
|
GeneZC/bert-large-mrpc
|
GeneZC
| 2023-05-22T11:04:54Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T10:53:28Z |
---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-large-uncased` finetuned on `MRPC`.
## Parameter settings
batch size is 16, learning rate is 3e-5.
## Metrics
acc: 0.8922, f1: 0.9225
|
pavankrishna/news_sentiment_analysis
|
pavankrishna
| 2023-05-22T10:54:48Z | 0 | 1 | null |
[
"license:openrail",
"region:us"
] | null | 2023-05-09T17:09:39Z |
---
license: openrail
---
# Model Card for Sentiment Analysis Model (News)
## Model Details
- **Model Name:** Sentiment Analysis Model (News)
- **Model Version:** 1.01.1
- **Model Type:** Natural Language Processing (NLP)
- **Model Architecture:** Bidirectional LSTM with Embedding layer
- **Model Input:** Preprocessed text data
- **Model Output:** Predicted sentiment category (positive, negative, or neutral)
## Model Performance
- **Accuracy:** 0.95
- **Loss:** 0.35
- **Dataset:** News articles dataset
- **Dataset Size:** 8.5M samples
- **Performance Metrics:** Categorical cross-entropy loss and accuracy
## Intended Use
- The model is intended for sentiment analysis of news article headlines.
- It is not intended for sentiment analysis of other types of text data, such as social media posts or product reviews.
- It is intended for use in research and development projects related to sentiment analysis.
## Ethical Considerations
- The model was trained on a dataset of news articles, which may contain biased or subjective language.
- The model may produce biased results for certain types of news articles or for certain demographics of readers.
- The model does not take into account the context or background of the news article, which may impact the accuracy of the sentiment analysis.
- The model should be used in conjunction with human review and interpretation to ensure that the sentiment analysis is accurate and appropriate.
## Model Limitations
- The model is limited by the quality and representativeness of the dataset on which it was trained.
- The model may not perform well on news articles that contain highly complex or nuanced language.
- The model does not take into account the tone or emotion of the news article, which may impact the sentiment analysis.
## Model Training Details
- **Training Algorithm:** Adam optimizer
- **Number of Epochs:** 10
- **Training Time:** 32 hours
- **Training Hardware:** NVIDIA Tesla P100 GPU
- **Data Preprocessing:** Tokenization, stopword removal, lemmatization, and padding
- **Data Augmentation:** None
- **Validation Split:** 20% of the dataset
- **Hyperparameters:** Vocabulary size of 200,000, embedding dimension of 300, and sequence length of 100 (see the reconstruction sketch below)
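Given only the details above, a hedged reconstruction of the architecture in Keras might look like the following; the LSTM hidden size is an assumption, since the card does not state it.
```python
from tensorflow import keras
from tensorflow.keras import layers

# Reconstruction sketch from the stated hyperparameters:
# vocabulary 200,000, embedding dim 300, sequence length 100, 3 sentiment classes.
# The LSTM hidden size (128) is an assumption; it is not stated in the card.
model = keras.Sequential([
    keras.Input(shape=(100,)),                        # padded token-id sequences of length 100
    layers.Embedding(input_dim=200_000, output_dim=300),
    layers.Bidirectional(layers.LSTM(128)),
    layers.Dense(3, activation="softmax"),            # positive / negative / neutral
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```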
## License and Intellectual Property
- The model and associated code are licensed under the Apache License 2.0.
- The model was developed by [Pavan krishna Narne].
- The model is open source and free to use for non-commercial purposes.
- Commercial use of the model requires permission from [Pavan krishna Narne].
|
GeneZC/bert-large-mnlimm
|
GeneZC
| 2023-05-22T10:51:06Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T09:18:44Z |
---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-large-uncased` finetuned on `MNLI-mm`.
## Parameter settings
batch size is 32, learning rate is 2e-5.
## Metrics
acc: 0.8613
|
lienK3/Taxi-v3
|
lienK3
| 2023-05-22T10:50:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T10:50:21Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.34 +/- 2.79
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="lienK3/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
lienK3/q-FrozenLake-v1-4x4-noSlippery
|
lienK3
| 2023-05-22T10:47:03Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T10:46:58Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="lienK3/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
RenauxLouis/merged-monet-mitchell-10000steps-688
|
RenauxLouis
| 2023-05-22T10:45:25Z | 3 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-20T19:25:57Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - RenauxLouis/merged-monet-mitchell-8000steps-688
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the merged-monet-mitchell-dataset dataset. You can find some example images in the following.




|
Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-2
|
Ioanaaaaaaa
| 2023-05-22T10:27:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T00:55:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sexism-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sexism-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3597
- Accuracy: 0.8555
- F1: 0.8540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.43 | 1.0 | 1876 | 0.3597 | 0.8555 | 0.8540 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mlashcorp/red-pajama-3b-sagemaker
|
mlashcorp
| 2023-05-22T10:25:20Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-22T09:37:40Z |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
---
Adapted from togethercomputer/RedPajama-INCITE-Base-3B-v1 to run in AWS Sagemaker
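A minimal local-inference sketch with 🤗 Transformers is shown below (SageMaker endpoint deployment itself is not covered here); the prompt and sampling settings are illustrative assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlashcorp/red-pajama-3b-sagemaker")
model = AutoModelForCausalLM.from_pretrained(
    "mlashcorp/red-pajama-3b-sagemaker", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```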
|
avocardio/alpaca-lora-7b-german-base-52k
|
avocardio
| 2023-05-22T10:24:58Z | 0 | 14 |
transformers
|
[
"transformers",
"llama",
"alpaca",
"llm",
"finetune",
"german",
"de",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-20T17:50:35Z |
---
license: apache-2.0
language:
- de
tags:
- llama
- alpaca
- llm
- finetune
- german
- transformers
---
# Zicklein: german 🇩🇪 finetuned instruction LLaMA
Visit the Github for more information: https://github.com/avocardio/zicklein
## Usage
```python
from peft import PeftModel
from transformers import LLaMATokenizer, LLaMAForCausalLM, GenerationConfig
import torch  # needed for torch.float16 below
tokenizer = LLaMATokenizer.from_pretrained("decapoda-research/llama-7b-hf")
model = LLaMAForCausalLM.from_pretrained(
"decapoda-research/llama-7b-hf",
load_in_8bit=False,
torch_dtype=torch.float16,
device_map="auto",
)
model = PeftModel.from_pretrained(model, "avocardio/alpaca-lora-7b-german-base-52k")
```
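Continuing from the snippet above, a hedged generation example follows; the German prompt and sampling settings are illustrative assumptions (see the GitHub repository for the exact Alpaca-style prompt template).
```python
# Continues from the loading snippet above (tokenizer, model, GenerationConfig already available).
prompt = "Erkläre den Unterschied zwischen einem Alpaka und einem Lama."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    generation_config=GenerationConfig(max_new_tokens=128, do_sample=True, temperature=0.7),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```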
|
Randikariskyrazak/DeepBoys
|
Randikariskyrazak
| 2023-05-22T10:20:38Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-22T09:20:07Z |
---
license: creativeml-openrail-m
---
|
Manaro/Taxi-v3
|
Manaro
| 2023-05-22T10:04:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T10:04:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Manaro/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Ztijn/robbertje-1-gb-non-shuffled-finetuned-squad
|
Ztijn
| 2023-05-22T10:02:48Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-22T08:27:25Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: robbertje-1-gb-non-shuffled-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robbertje-1-gb-non-shuffled-finetuned-squad
This model is a fine-tuned version of [DTAI-KULeuven/robbertje-1-gb-non-shuffled](https://huggingface.co/DTAI-KULeuven/robbertje-1-gb-non-shuffled) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6773 | 1.0 | 8326 | 1.5849 |
| 1.4207 | 2.0 | 16652 | 1.6746 |
| 1.2102 | 3.0 | 24978 | 1.6938 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Guigadal/layoutxlm-tc-finetuned
|
Guigadal
| 2023-05-22T09:44:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-22T08:57:46Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutxlm-tc-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutxlm-tc-finetuned
This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2621
- Answer Precision: 0.8634
- Answer Recall: 0.9369
- Answer F1: 0.8986
- Answer Number: 317
- Header Precision: 0.7391
- Header Recall: 0.7846
- Header F1: 0.7612
- Header Number: 65
- Question Precision: 0.8589
- Question Recall: 0.9045
- Question F1: 0.8811
- Question Number: 377
- Overall Precision: 0.8506
- Overall Recall: 0.9078
- Overall F1: 0.8783
- Overall Accuracy: 0.9687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5500
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
silver18723/ppo-LunarLander-v2
|
silver18723
| 2023-05-22T09:36:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T09:06:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.29 +/- 21.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is an assumption; verify it against the files in the repo.
checkpoint = load_from_hub(repo_id="silver18723/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
intfloat/simlm-base-msmarco-finetuned
|
intfloat
| 2023-05-22T09:35:23Z | 7,173 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"en",
"arxiv:2207.02578",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-08-04T09:21:28Z |
---
license: mit
language:
- en
---
# SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval
paper available at [https://arxiv.org/pdf/2207.02578](https://arxiv.org/pdf/2207.02578)
code available at [https://github.com/microsoft/unilm/tree/master/simlm](https://github.com/microsoft/unilm/tree/master/simlm)
## Paper abstract
In this paper, we propose SimLM (Similarity matching with Language Model pre-training), a simple yet effective pre-training method for dense passage retrieval.
It employs a simple bottleneck architecture that learns to compress the passage information into a dense vector through self-supervised pre-training.
We use a replaced language modeling objective, which is inspired by ELECTRA,
to improve the sample efficiency and reduce the mismatch of the input distribution between pre-training and fine-tuning.
SimLM only requires access to unlabeled corpus, and is more broadly applicable when there are no labeled data or queries.
We conduct experiments on several large-scale passage retrieval datasets, and show substantial improvements over strong baselines under various settings.
Remarkably, SimLM even outperforms multi-vector approaches such as ColBERTv2 which incurs significantly more storage cost.
## Results on MS-MARCO passage ranking task
| Model | dev MRR@10 | dev R@50 | dev R@1k | TREC DL 2019 nDCG@10 | TREC DL 2020 nDCG@10 |
|--|---|---|---|---|---|
| RocketQAv2 | 38.8 | 86.2 | 98.1 | - | - |
| coCondenser | 38.2 | 86.5 | 98.4 | 71.7 | 68.4 |
| ColBERTv2 | 39.7 | 86.8 | 98.4 | - | - |
| **SimLM (this model)** | 41.1 | 87.8 | 98.7 | 71.4 | 69.7 |
## Usage
Get embeddings from our fine-tuned model:
```python
import torch
from transformers import AutoModel, AutoTokenizer, BatchEncoding, PreTrainedTokenizerFast
from transformers.modeling_outputs import BaseModelOutput
def l2_normalize(x: torch.Tensor):
return torch.nn.functional.normalize(x, p=2, dim=-1)
def encode_query(tokenizer: PreTrainedTokenizerFast, query: str) -> BatchEncoding:
return tokenizer(query,
max_length=32,
padding=True,
truncation=True,
return_tensors='pt')
def encode_passage(tokenizer: PreTrainedTokenizerFast, passage: str, title: str = '-') -> BatchEncoding:
return tokenizer(title,
text_pair=passage,
max_length=144,
padding=True,
truncation=True,
return_tensors='pt')
tokenizer = AutoTokenizer.from_pretrained('intfloat/simlm-base-msmarco-finetuned')
model = AutoModel.from_pretrained('intfloat/simlm-base-msmarco-finetuned')
model.eval()
with torch.no_grad():
query_batch_dict = encode_query(tokenizer, 'what is qa')
outputs: BaseModelOutput = model(**query_batch_dict, return_dict=True)
query_embedding = l2_normalize(outputs.last_hidden_state[0, 0, :])
psg1 = 'Quality assurance (QA) is a process-centered approach to ensuring that a company or organization is providing the best possible products or services. It is related to quality control, which focuses on the end result, such as testing a sample of items from a batch after production.'
psg1_batch_dict = encode_passage(tokenizer, psg1)
outputs: BaseModelOutput = model(**psg1_batch_dict, return_dict=True)
psg1_embedding = l2_normalize(outputs.last_hidden_state[0, 0, :])
psg2 = 'The Super Bowl is typically four hours long. The game itself takes about three and a half hours, with a 30 minute halftime show built in.'
psg2_batch_dict = encode_passage(tokenizer, psg2)
outputs: BaseModelOutput = model(**psg2_batch_dict, return_dict=True)
psg2_embedding = l2_normalize(outputs.last_hidden_state[0, 0, :])
# Higher cosine similarity means they are more relevant
print(query_embedding.dot(psg1_embedding), query_embedding.dot(psg2_embedding))
```
## Citation
```bibtex
@article{Wang2022SimLMPW,
title={SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval},
author={Liang Wang and Nan Yang and Xiaolong Huang and Binxing Jiao and Linjun Yang and Daxin Jiang and Rangan Majumder and Furu Wei},
journal={ArXiv},
year={2022},
volume={abs/2207.02578}
}
```
|
MayIBorn/ft-sd15-iom_person
|
MayIBorn
| 2023-05-22T09:33:28Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-22T09:25:35Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of iom person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - MayIBorn/ft-sd15-iom_person
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of iom person using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: True.
|
vind/ppo-Worm
|
vind
| 2023-05-22T09:23:12Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Worm",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"region:us"
] |
reinforcement-learning
| 2023-05-22T09:23:06Z |
---
library_name: ml-agents
tags:
- Worm
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Step 1: Find your model_id: vind/ppo-Worm
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Smoden/newest_Alice_mix_wizard_mix_Chronicles_diff_lora
|
Smoden
| 2023-05-22T09:23:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-22T07:00:44Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Smoden/newest_Alice_mix_wizard_mix_Chronicles_diff_lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the None dataset. You can find some example images in the following.
|
Randikariskyrazak/DeepBoys_2.5D
|
Randikariskyrazak
| 2023-05-22T09:19:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-22T09:19:47Z |
---
license: creativeml-openrail-m
---
|
LearnItAnyway/llama-30b-hf-53q_4bit-128g_WVU
|
LearnItAnyway
| 2023-05-22T09:18:17Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-22T06:07:35Z |
---
license: other
---
# Model Card for llama-30b-hf-53q_4bit-128g_WVU
## Model Description
`llama-30b-hf-53q_4bit-128g_WVU` is a model based on the
Llama architecture with 30 billion parameters.
This model adopts a quantization in which the first 53 layers
of the decoder have been quantized with the [`gptq`](https://github.com/qwopqwop200/GPTQ-for-LLaMa) method,
which uses 4-bit precision and 128 groups.
Then, the last 7 decoder layers (1/8 of decoding layers), and lm_head have been fine-tuned using the [wizard_vicuna_70k_unfiltered dataset](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered), 1 epoch.
## Note
Quantization effectively reduces memory usage; however, it may introduce small differences in the parameters.
Additionally, fine-tuning only the last few layers lowers memory requirements for training but could lead to minor performance degradation.
Several alternatives exist for fine-tuning and quantizing the Llama models. The specific method utilized here—quantizing several layers,
followed by fine-tuning the last few layers—is designed to account for errors introduced during quantization (which sometimes can result in unexpected answers),
and enables the last few layers to be fine-tuned considering both the quantization error and the dataset.
It is worth mentioning that other methods may yield superior performance. For instance:
1. Fine-tuning the entire model for `X` epochs
2. Quantizing the first `K` layers
3. Fine-tuning the remaining layers for `Y` epochs
Nonetheless, as fine-tuning the entire model requires considerable resources (for example, 4 GPUs with 80GB VRAM are required even for the 7B LLaMA),
this model omits the first step of the method described above, and it still works.
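The layer-freezing half of this recipe can be sketched with plain PyTorch/🤗 Transformers calls. This is a conceptual illustration only: it ignores the GPTQ-quantized layers, uses a placeholder checkpoint name, and is not the training code from the linked repository.
```python
from transformers import AutoModelForCausalLM

# Placeholder checkpoint; the real setup starts from a LLaMA-30B whose first 53 decoder
# layers are GPTQ-quantized (handled by the custom code linked below).
model = AutoModelForCausalLM.from_pretrained("<llama-30b-checkpoint>")

# Freeze everything, then unfreeze the last 7 decoder layers and lm_head for fine-tuning.
for param in model.parameters():
    param.requires_grad = False
for layer in model.model.layers[-7:]:
    for param in layer.parameters():
        param.requires_grad = True
for param in model.lm_head.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```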
## Using the Model
To load the model, a custom `LlamaForCausalLM` is required.
You can find quantized llama [here](https://github.com/LearnItAnyway/quantized_llama).
## References
1. Meta - LLaMA
2. [WizardLM](https://github.com/nlpxucan/WizardLM)
3. [GPTQ for LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa)
4. [Wizard Vicuna Unfiltered Dataset](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
5. Various other unlisted but excellent works, research efforts, and projects.
|
LearnItAnyway/llama-7b-hf-28q_4bit-128g_WVU
|
LearnItAnyway
| 2023-05-22T09:17:11Z | 34 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-22T01:13:21Z |
---
license: other
---
# Model Card for llama-7b-hf-28q_4bit-128g_WVU
## Model Description
`llama-7b-hf-28q_4bit-128g_WVU` is a model based on the
Llama architecture with 7 billion parameters.
This model adopts a quantization in which the first 28 layers
of the decoder have been quantized with the [`gptq`](https://github.com/qwopqwop200/GPTQ-for-LLaMa) method,
which uses 4-bit precision and 128 groups.
Then, the last 4 decoder layers (1/8 of decoding layers), and lm_head have been fine-tuned using the [wizard_vicuna_70k_unfiltered dataset](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered), 1 epoch.
## Note
Quantization effectively reduces memory usage; however, it may introduce small differences in the parameters.
Additionally, fine-tuning only the last few layers lowers memory requirements for training but could lead to minor performance degradation.
Several alternatives exist for fine-tuning and quantizing the Llama models. The specific method utilized here—quantizing several layers,
followed by fine-tuning the last few layers—is designed to account for errors introduced during quantization (which sometimes can result in unexpected answers),
and enables the last few layers to be fine-tuned considering both the quantization error and the dataset.
It is worth mentioning that other methods may yield superior performance. For instance:
1. Fine-tuning the entire model for `X` epochs
2. Quantizing the first `K` layers
3. Fine-tuning the remaining layers for `Y` epochs
Nonetheless, as fine-tuning the entire model requires considerable resources (for example, 4 GPUs with 80GB VRAM are required even for the 7B LLaMA),
this model omits the first step of the method described above, and it still works.
## Using the Model
To load the model, a custom `LlamaForCausalLM` is required.
You can find quantized llama [here](https://github.com/LearnItAnyway/quantized_llama).
## References
1. Meta - LLaMA
2. [WizardLM](https://github.com/nlpxucan/WizardLM)
3. [GPTQ for LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa)
4. [Wizard Vicuna Unfiltered Dataset](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
5. Various other unlisted but excellent works, research efforts, and projects.
|
GeneZC/bert-large-mnli
|
GeneZC
| 2023-05-22T09:16:58Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T09:04:45Z |
---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-large-uncased` finetuned on `MNLI`.
## Parameter settings
batch size is 32, learning rate is 2e-5.
## Metrics
acc: 0.8660
|
Antonnnekke/w2v2-libri-10min
|
Antonnnekke
| 2023-05-22T09:14:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-05-22T09:08:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: w2v2-libri-10min
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v2-libri-10min
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2500
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 1.18.3
- Tokenizers 0.13.3
|
mathiasbjorgum/fine_tune_destilbert_test
|
mathiasbjorgum
| 2023-05-22T09:13:40Z | 182 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-22T08:38:31Z |
This is a fine-tuned DistilBERT model that predicts directional stock movements 20 minutes after a news article is published. It is designed to be applied to article titles/headlines.
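A minimal inference sketch is shown below; the example headline is illustrative, and the meaning of the output labels (e.g. up vs. down) is not documented in this card, so check the model's config before relying on them.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mathiasbjorgum/fine_tune_destilbert_test")
print(classifier("Company X beats quarterly earnings expectations"))
```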
|
chanchongwei/fsl-mpnet-base-v2
|
chanchongwei
| 2023-05-22T09:10:38Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-22T06:33:16Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# chanchongwei/fsl-mpnet-base-v2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("chanchongwei/fsl-mpnet-base-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
vind/ppo-PyramidsRND-1M
|
vind
| 2023-05-22T09:03:28Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-05-22T09:03:22Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: vind/ppo-PyramidsRND-1M
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Akunxxx/Mjeyincaaa
|
Akunxxx
| 2023-05-22T09:02:04Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-22T08:59:41Z |
---
license: creativeml-openrail-m
---
|
headlesstech/bengali_dpr
|
headlesstech
| 2023-05-22T08:45:08Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-05-22T08:37:55Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# headlesstech/bengali_dpr
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('headlesstech/bengali_dpr')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('headlesstech/bengali_dpr')
model = AutoModel.from_pretrained('headlesstech/bengali_dpr')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=headlesstech/bengali_dpr)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 30470 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6094,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GeneZC/bert-base-stsb
|
GeneZC
| 2023-05-22T08:39:09Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T08:26:43Z |
---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-base-uncased` finetuned on `STS-b`.
## Parameter settings
batch size is 16, learning rate is 3e-5.
## Metrics
pearson_corr: 0.8742, spearman_corr: 0.8707
|
GeneZC/bert-base-qqp
|
GeneZC
| 2023-05-22T08:37:40Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T06:28:08Z |
---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-base-uncased` finetuned on `QQP`.
## Parameter settings
batch size is 32, learning rate is 2e-5.
## Metrics
acc: 0.9140, f1: 0.8840
|
GeneZC/bert-base-mnli
|
GeneZC
| 2023-05-22T08:34:39Z | 32 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T05:53:49Z |
---
license: apache-2.0
datasets:
- glue
---
# Model Details
`bert-base-uncased` finetuned on `MNLI`.
## Parameter settings
batch size is 32, learning rate is 2e-5.
## Metrics
acc: 0.8491
|
potetofry/ren
|
potetofry
| 2023-05-22T08:13:28Z | 35 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-22T07:41:29Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ren Dreambooth model trained by potetofry with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
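A minimal `diffusers` sketch for trying the concept locally is given below; the instance token is assumed to be "ren", so check the training prompt if results look off.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("potetofry/ren", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe("a photo of ren").images[0]  # "ren" is assumed to be the instance token
image.save("ren.png")
```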
Sample pictures of this concept:
|
aseljayasooriya/sl-law-roberta-5
|
aseljayasooriya
| 2023-05-22T08:00:26Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-05-22T07:28:23Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: sl-law-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sl-law-roberta
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
abid/indonesia-bioner
|
abid
| 2023-05-22T07:56:44Z | 23 | 5 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"id",
"en",
"doi:10.57967/hf/3559",
"license:bsd-3-clause",
"region:us"
] |
token-classification
| 2022-05-11T18:55:55Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
language:
- id
- en
license: bsd-3-clause
widget:
- text: 'Dok saya mau tanya kenapa ya kulit saya kering bersisik gitu dok. Apalagi bagian tumit sampai nglupas terus gatal. Penyebabnya apa y dok terus cara mengobatinya gimana? Terima kasi'
- text: 'halo dok saya mau bertanya saya sering merasa cemas resah dan takut akan semua yg saya lakukan dan kejar , padahal aktifitas sehari hari berjalan lancar pdahal saya di kantor cukup terbilang sebagai karyawan terbaik tetapi saya merasa terbebani dengan cemas dan rasa takut itu sendiri'
- text: 'Does anyone else feel like their losing there mind with all the hormonal changes? One minute Im all happy and then Im crying. Tumor was seen in 2014 and I was never told. Lots of other surgeries, they have already told me surgery needs to done. This would be around my 20th surgery. Alot of different parts of body have been medically altered and this time its all my chose on what i want to do. Im opting to just let it all go and let god do what he needs to with me. Im not scared for myself but for my family and people I love.'
---
## Biomedical Entity Recognition in Bahasa Indonesia
Summary:
- Trained using manually annotated data from alodokter.com (an online health Q&A platform), following the UMLS guideline (see https://rdcu.be/cNxV3)
- Recognizes disorder (DISO) and anatomy (ANAT) entities
- Achieves a best macro F1 score of 0.81
- Based on XLM-RoBERTa, so cross-lingual recognition might work
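## Usage
A minimal Flair sketch is given below; it assumes the repository ships a standard Flair checkpoint that `SequenceTagger.load` can fetch from the Hub, and the example sentence is illustrative.
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Assumes the Hub repo contains a Flair SequenceTagger checkpoint under its default filename.
tagger = SequenceTagger.load("abid/indonesia-bioner")

sentence = Sentence("Dok, kenapa kulit saya kering bersisik dan tumit saya gatal?")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```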
## CITATION
This work was done with generous support from Safitri Juanita, Dr. Diana Purwitasari, and Dr. Mauridhi Hery Purnomo from Institut Teknologi Sepuluh Nopember, Indonesia.
A citation for academic purposes will be provided later.
In the meantime, please let me know whenever you use this model (mail to: abid(dot)famasya(at)gmail.com) :)
For demo, please go to the HF space demo: https://huggingface.co/spaces/abid/id-bioner-demo
|
firefistape/ppo-LunarLander-v2
|
firefistape
| 2023-05-22T07:54:38Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T07:54:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.80 +/- 25.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is an assumption; verify it against the files in the repo.
checkpoint = load_from_hub(repo_id="firefistape/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
istinetz/q-FrozenLake-v1-4x4-noSlippery
|
istinetz
| 2023-05-22T07:30:29Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T07:30:25Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="istinetz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
oyesaurav/dwellbert
|
oyesaurav
| 2023-05-22T07:30:04Z | 63 | 1 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"ditilbert",
"text classification",
"clinical notes",
"wellnation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-18T17:39:42Z |
---
language:
- en
tags:
- ditilbert
- text classification
- clinical notes
- wellnation
---
<pre>
labels map =
{
"0": "Gastroenterology",
"1": "Neurology",
"2": "Orthopedic",
"3": "Radiology",
"4": "Urology"
}
</pre>
<h2><i>The fine-tuned model has been trained on around 2,300 medical transcriptions to classify the medical specialty.
More classes will be added as more data becomes available.</i></h2>
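A minimal inference sketch using the label map above (TensorFlow weights only, hence `framework="tf"`); the example clinical note is illustrative.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="oyesaurav/dwellbert", framework="tf")
print(classifier("Patient presents with right knee pain after a fall; X-ray shows no fracture."))
```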
|
Afsara/cse_buet_bangla_t5
|
Afsara
| 2023-05-22T07:23:22Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"bn",
"arxiv:2205.11081",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-22T07:14:46Z |
---
language:
- bn
licenses:
- cc-by-nc-sa-4.0
---
# BanglaT5
This repository contains the pretrained checkpoint of the model **BanglaT5**. This is a sequence-to-sequence transformer model pretrained with the ["Span Corruption"]() objective. Finetuned models using this checkpoint achieve state-of-the-art results on many of the NLG tasks in Bengali.
For finetuning on different downstream tasks such as `Machine Translation`, `Abstractive Text Summarization`, `Question Answering` etc., refer to the scripts in the official GitHub [repository](https://github.com/csebuetnlp/BanglaNLG).
**Note**: This model was pretrained using a specific normalization pipeline available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository use this normalization by default. If you need to adapt the pretrained model for a different task make sure the text units are normalized using this pipeline before tokenizing to get best results. A basic example is given below:
## Using this model in `transformers` (tested on 4.11.0.dev0)
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from normalizer import normalize # pip install git+https://github.com/csebuetnlp/normalizer
model = AutoModelForSeq2SeqLM.from_pretrained("csebuetnlp/banglat5")
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglat5", use_fast=False)
input_sentence = ""
input_ids = tokenizer(normalize(input_sentence), return_tensors="pt").input_ids
generated_tokens = model.generate(input_ids)
decoded_tokens = tokenizer.batch_decode(generated_tokens)[0]
print(decoded_tokens)
```
## Benchmarks
* Supervised fine-tuning
| Model | Params | MT (SacreBLEU) | TS (ROUGE-2) | QA (EM/F1) | MD (SacreBLEU-1) | NHG (ROUGE-2) | XLS (ROUGE-2) | BNLG score |
|--------------------|------------|-----------------------|------------------------|-------------------|--------------------|----------------|----------------|---------------|
|[mT5 (base)](https://huggingface.co/google/mt5-base) | 582M | 36.6/22.5 | 10.3 | 59.0/65.3 | 17.5 | 9.6 | 2.7/0.7 | 24.9 |
|[XLM-ProphetNet](https://huggingface.co/microsoft/xprophetnet-large-wiki100-cased) | 616M | 23.3/16.4 | 7.8 | 53.0/57.3 | 20.0 | 9.5 | 6.2/2.7 | 21.8 |
|[mBART-50](https://huggingface.co/facebook/mbart-large-50) | 611M | 23.6/16.7 | 10.4 | 53.4/58.9 | 18.5 | 11.2 | 5.4/3.7 | 22.4 |
|[IndicBART](https://huggingface.co/ai4bharat/IndicBART) | 244M | 22.7/13.1 | 8.1 | 53.3/58.8 | 14.8 | 7.9 | 6.3/2.5 | 20.8 |
|[BanglaT5](https://huggingface.co/csebuetnlp/banglat5) | 247M | 38.8/25.2 | 13.7 | 68.5/74.8 | 19.0 | 13.8 | 6.4/4.0 | 29.4 |
The benchmarking datasets are as follows:
* **MT:** **[Machine Translation](https://github.com/csebuetnlp/banglanmt#datasets)**
* **TS:** **[Abstractive Text Summarization](https://huggingface.co/datasets/csebuetnlp/xlsum)**
* **QA:** **[Question Answering](https://huggingface.co/datasets/csebuetnlp/squad_bn)**
* **MD:** **[Multi Turn Dialogue Generation](https://drive.google.com/file/d/1qPmNN6qA4evbh4cD_BDDTCFOwMu4H2JS/view?usp=sharing)**
* **NHG:** **[News Headline Generation](https://huggingface.co/datasets/csebuetnlp/xlsum)**
* **XLS:** **[Cross-lingual Summarization](https://huggingface.co/datasets/csebuetnlp/CrossSum)**
## Citation
If you use this model, please cite the following paper:
```
@article{bhattacharjee2022banglanlg,
author = {Abhik Bhattacharjee and Tahmid Hasan and Wasi Uddin Ahmad and Rifat Shahriyar},
title = {BanglaNLG: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in Bangla},
journal = {CoRR},
volume = {abs/2205.11081},
year = {2022},
url = {https://arxiv.org/abs/2205.11081},
eprinttype = {arXiv},
eprint = {2205.11081}
}
```
If you use the normalization module, please cite the following paper:
```
@inproceedings{hasan-etal-2020-low,
title = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
author = "Hasan, Tahmid and
Bhattacharjee, Abhik and
Samin, Kazi and
Hasan, Masum and
Basak, Madhusudan and
Rahman, M. Sohel and
Shahriyar, Rifat",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
doi = "10.18653/v1/2020.emnlp-main.207",
pages = "2612--2623",
abstract = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```
|
leonhe/ppo-Huggy
|
leonhe
| 2023-05-22T07:22:50Z | 18 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-05-22T07:22:43Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: leonhe/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
HasinMDG/mpnet-base-v2-multilingual-IPTC-L1
|
HasinMDG
| 2023-05-22T07:15:16Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-05-06T12:26:52Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/mpnet-base-v2-multilingual-IPTC-L1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/mpnet-base-v2-multilingual-IPTC-L1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
facebook/dino-vits16
|
facebook
| 2023-05-22T07:05:10Z | 62,662 | 14 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-feature-extraction",
"dino",
"vision",
"dataset:imagenet-1k",
"arxiv:2104.14294",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-feature-extraction
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- dino
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (small-sized model, patch size 16) trained using DINO
Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino).
Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = ViTImageProcessor.from_pretrained('facebook/dino-vits16')
model = ViTModel.from_pretrained('facebook/dino-vits16')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
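As a sketch of the linear-probe setup described under *Model description* (not part of the original card), the [CLS] embedding from `last_hidden_states` can be fed to a linear classification head; the number of classes below is a placeholder:
```python
import torch

# The [CLS] token sits at position 0 of the last hidden state and can serve
# as a global image representation (384-dimensional for ViT-S/16).
cls_embedding = last_hidden_states[:, 0]

num_classes = 10  # placeholder; depends on your downstream dataset
classifier = torch.nn.Linear(cls_embedding.shape[-1], num_classes)
logits = classifier(cls_embedding)  # untrained head, for illustration only
```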
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-14294,
author = {Mathilde Caron and
Hugo Touvron and
Ishan Misra and
Herv{\'{e}} J{\'{e}}gou and
Julien Mairal and
Piotr Bojanowski and
Armand Joulin},
title = {Emerging Properties in Self-Supervised Vision Transformers},
journal = {CoRR},
volume = {abs/2104.14294},
year = {2021},
url = {https://arxiv.org/abs/2104.14294},
archivePrefix = {arXiv},
eprint = {2104.14294},
timestamp = {Tue, 04 May 2021 15:12:43 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
facebook/dino-vitb16
|
facebook
| 2023-05-22T07:04:00Z | 214,283 | 107 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"vit",
"image-feature-extraction",
"dino",
"vision",
"dataset:imagenet-1k",
"arxiv:2104.14294",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-feature-extraction
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- dino
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (base-sized model, patch size 16) trained using DINO
Vision Transformer (ViT) model trained using the DINO method. It was introduced in the paper [Emerging Properties in Self-Supervised Vision Transformers](https://arxiv.org/abs/2104.14294) by Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin and first released in [this repository](https://github.com/facebookresearch/dino).
Disclaimer: The team releasing DINO did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
processor = ViTImageProcessor.from_pretrained('facebook/dino-vitb16')
model = ViTModel.from_pretrained('facebook/dino-vitb16')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
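Besides the [CLS] token, a common alternative (shown here as an illustrative sketch, not taken from the original card) is to mean-pool the patch embeddings from `last_hidden_states` to obtain a single feature vector per image:
```python
# Drop the [CLS] token at position 0 and average the remaining patch tokens.
patch_embeddings = last_hidden_states[:, 1:]
image_features = patch_embeddings.mean(dim=1)  # shape: (batch_size, 768) for ViT-B/16
```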
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2104-14294,
author = {Mathilde Caron and
Hugo Touvron and
Ishan Misra and
Herv{\'{e}} J{\'{e}}gou and
Julien Mairal and
Piotr Bojanowski and
Armand Joulin},
title = {Emerging Properties in Self-Supervised Vision Transformers},
journal = {CoRR},
volume = {abs/2104.14294},
year = {2021},
url = {https://arxiv.org/abs/2104.14294},
archivePrefix = {arXiv},
eprint = {2104.14294},
timestamp = {Tue, 04 May 2021 15:12:43 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-14294.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
Smoden/newest_Alice_mix_wizard_diff_lora
|
Smoden
| 2023-05-22T06:53:36Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-05-22T02:01:43Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - Smoden/newest_Alice_mix_wizard_diff_lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on an unspecified (`None`) dataset. Example images can be found below.
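A minimal inference sketch (not part of the original card), assuming the diffusers LoRA attention-processor loading API and a purely illustrative prompt; adjust to your installed diffusers version:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach the LoRA weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("Smoden/newest_Alice_mix_wizard_diff_lora")

# The prompt below is purely illustrative.
image = pipe("Alice wandering through a wizard's forest", num_inference_steps=30).images[0]
image.save("alice.png")
```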
|
lucasfelezdev/TextToImage
|
lucasfelezdev
| 2023-05-22T06:48:36Z | 0 | 0 |
diffusers
|
[
"diffusers",
"en",
"fr",
"dataset:poloclub/diffusiondb",
"region:us"
] | null | 2023-05-22T06:43:07Z |
---
datasets:
- poloclub/diffusiondb
language:
- en
- fr
metrics:
- bertscore
library_name: diffusers
---
|
fionaxzf/billsum_model
|
fionaxzf
| 2023-05-22T06:41:07Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-22T06:24:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1438
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5047
- Rouge1: 0.1438
- Rouge2: 0.0514
- Rougel: 0.1198
- Rougelsum: 0.1196
- Gen Len: 19.0
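A minimal inference sketch (not generated by the Trainer); the `summarize: ` prefix follows the usual T5 convention and is an assumption, as is the truncated example input:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="fionaxzf/billsum_model")

bill_text = "The people of the State of California do enact as follows: ..."  # illustrative input
print(summarizer("summarize: " + bill_text, max_length=60, min_length=20)[0]["summary_text"])
```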
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7922 | 0.1323 | 0.0416 | 0.1123 | 0.1121 | 19.0 |
| No log | 2.0 | 124 | 2.5856 | 0.1358 | 0.0455 | 0.114 | 0.114 | 19.0 |
| No log | 3.0 | 186 | 2.5226 | 0.1403 | 0.0485 | 0.1165 | 0.1166 | 19.0 |
| No log | 4.0 | 248 | 2.5047 | 0.1438 | 0.0514 | 0.1198 | 0.1196 | 19.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
michaelfeil/ct2fast-codegen2-7B
|
michaelfeil
| 2023-05-22T06:31:51Z | 5 | 3 |
transformers
|
[
"transformers",
"ctranslate2",
"int8",
"float16",
"arxiv:2305.02309",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-05-22T05:43:20Z |
---
tags:
- ctranslate2
- int8
- float16
license: apache-2.0
---
# Fast-Inference with Ctranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
Quantized version of [Salesforce/codegen2-7B](https://huggingface.co/Salesforce/codegen2-7B)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-22 using
```
ct2-transformers-converter --model Salesforce/codegen2-7B --output_dir /home/michael/tmp-ct2fast-codegen2-7B --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json special_tokens_map.json added_tokens.json configuration_codegen.py .gitattributes --quantization float16
```
Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-codegen2-7B"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = GeneratorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
# tokenizer=AutoTokenizer.from_pretrained("Salesforce/codegen2-7B")
)
outputs = model.generate(
text=["def print_hello_world():", "def hello_name(name:"],
max_length=64
)
print(outputs)
```
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
# CodeGen2 (CodeGen2-7B)
## Model description
[CodeGen2](https://github.com/salesforce/CodeGen2) is a family of autoregressive language models for **program synthesis**, introduced in the paper:
[CodeGen2: Lessons for Training LLMs on Programming and Natural Languages](https://arxiv.org/abs/2305.02309) by Erik Nijkamp\*, Hiroaki Hayashi\*, Caiming Xiong, Silvio Savarese, Yingbo Zhou.
Unlike the original CodeGen model family (i.e., CodeGen1), CodeGen2 is capable of infilling, and supports more programming languages.
Four model sizes are released: `1B`, `3.7B`, `7B`, `16B`.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality.
### Causal sampling
For regular causal sampling, simply generate completions given the context:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-7B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-7B", trust_remote_code=True, revision="main")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
### Infill sampling
For **infill** sampling, we introduce three new special token types:
* `<mask_N>`: N-th span to be masked. In practice, use `<mask_1>` where you want to sample the infill.
* `<sep>`: Separator token between the suffix and the infilled sample. See below.
* `<eom>`: "End-Of-Mask" token that the model will output at the end of infilling. You may use this token to truncate the output.
For example, if we want to generate infill for the following cursor position of a function:
```python
def hello_world():
|
return name
```
we construct an input to the model by:
1. Inserting a `<mask_1>` token in place of the cursor position
2. Appending a `<sep>` token to indicate the boundary
3. Appending another `<mask_1>` to indicate which mask we want to infill
The final snippet looks as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen2-7B")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen2-7B", trust_remote_code=True, revision="main")
def format(prefix, suffix):
return prefix + "<mask_1>" + suffix + "<|endoftext|>" + "<sep>" + "<mask_1>"
prefix = "def hello_world():\n "
suffix = " return name"
text = format(prefix, suffix)
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):])
```
You might want to truncate the model output with `<eom>`.
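For instance, a minimal way to do that (assuming `generated_ids`, `tokenizer`, and `text` from the snippet above) is:
```python
# Decode only the newly generated part and keep the text before the "End-Of-Mask" token.
infill = tokenizer.decode(generated_ids[0], skip_special_tokens=False)[len(text):]
truncated_infill = infill.split("<eom>")[0]
print(truncated_infill)
```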
## Training data
This checkpoint is trained on the stricter permissive subset of [the deduplicated version of the Stack dataset (v1.1)](https://huggingface.co/datasets/bigcode/the-stack-dedup). Supported languages (and frameworks) are as follows:
`c`, `c++`, `c-sharp`, `dart`, `go`, `java`, `javascript`, `kotlin`, `lua`, `php`, `python`, `ruby`, `rust`, `scala`, `shell`, `sql`, `swift`, `typescript`, `vue`.
## Training procedure
CodeGen2 was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The input sequences are formatted in two ways: (1) causal language modeling and (2) file-level span corruption.
Please refer to the paper for more details.
## Evaluation results
We evaluate our models on HumanEval and HumanEval-Infill. Please refer to the [paper](https://arxiv.org/abs/2305.02309) for more details.
## Intended use and limitations
As an autoregressive language model, CodeGen2 is capable of extracting features from given natural language and programming language texts, and of calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2023codegen2,
title={CodeGen2: Lessons for Training LLMs on Programming and Natural Languages},
author={Nijkamp, Erik and Hayashi, Hiroaki and Xiong, Caiming and Savarese, Silvio and Zhou, Yingbo},
journal={arXiv preprint},
year={2023}
}
```
|
ehanJ/distilbert-base-uncased-finetuned-emotion
|
ehanJ
| 2023-05-22T06:25:51Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T06:20:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9205
- name: F1
type: f1
value: 0.9205899308588681
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2240
- Accuracy: 0.9205
- F1: 0.9206
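A minimal inference sketch (not generated by the Trainer); the example sentence is illustrative and the checkpoint id is taken from the card title:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ehanJ/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled that the experiment finally worked!"))
```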
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8441 | 1.0 | 250 | 0.3201 | 0.904 | 0.9018 |
| 0.2551 | 2.0 | 500 | 0.2240 | 0.9205 | 0.9206 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
seeoo/distilbert-base-uncased-finetuned-emotion
|
seeoo
| 2023-05-22T04:59:55Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T04:53:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2138
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8297 | 1.0 | 250 | 0.3079 | 0.905 | 0.9018 |
| 0.2463 | 2.0 | 500 | 0.2138 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
|
THUDM/ImageReward
|
THUDM
| 2023-05-22T04:12:22Z | 0 | 54 | null |
[
"text-to-image",
"en",
"arxiv:2304.05977",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2023-04-06T11:54:38Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-to-image
---
# ImageReward
<p align="center">
<a href="https://github.com/THUDM/ImageReward" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/thukeg" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.05977" target="_blank">Paper</a> <br>
</p>
**ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation**
ImageReward is the first general-purpose text-to-image human preference reward model (RM). It is trained on a total of 137k pairs of expert comparisons, based on text prompts and corresponding model outputs from DiffusionDB. Through extensive analysis and experiments, we demonstrate that ImageReward outperforms existing text-image scoring methods, such as CLIP, Aesthetic, and BLIP, in terms of understanding human preference in text-to-image synthesis.

## Quick Start
### Install Dependency
We have integrated the whole repository into a single Python package, `image-reward`. Follow the commands below to prepare the environment:
```shell
# Clone the ImageReward repository (containing data for testing)
git clone https://github.com/THUDM/ImageReward.git
cd ImageReward
# Install the integrated package `image-reward`
pip install image-reward
```
### Example Use
We provide example images in the [`assets/images`](assets/images) directory of this repo. The example prompt is:
```text
a painting of an ocean with clouds and birds, day time, low depth field effect
```
Use the following code to get the human preference scores from ImageReward:
```python
import os
import torch
import ImageReward as reward
if __name__ == "__main__":
prompt = "a painting of an ocean with clouds and birds, day time, low depth field effect"
img_prefix = "assets/images"
generations = [f"{pic_id}.webp" for pic_id in range(1, 5)]
img_list = [os.path.join(img_prefix, img) for img in generations]
model = reward.load("ImageReward-v1.0")
with torch.no_grad():
ranking, rewards = model.inference_rank(prompt, img_list)
# Print the result
print("\nPreference predictions:\n")
print(f"ranking = {ranking}")
print(f"rewards = {rewards}")
for index in range(len(img_list)):
score = model.score(prompt, img_list[index])
print(f"{generations[index]:>16s}: {score:.2f}")
```
The output should look like the following (the exact numbers may be slightly different depending on the compute device):
```
Preference predictions:
ranking = [1, 2, 3, 4]
rewards = [[0.5811622738838196], [0.2745276093482971], [-1.4131819009780884], [-2.029569625854492]]
1.webp: 0.58
2.webp: 0.27
3.webp: -1.41
4.webp: -2.03
```
## Citation
```
@misc{xu2023imagereward,
title={ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation},
author={Jiazheng Xu and Xiao Liu and Yuchen Wu and Yuxuan Tong and Qinkai Li and Ming Ding and Jie Tang and Yuxiao Dong},
year={2023},
eprint={2304.05977},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
yfyeung/icefall-asr-finetune-mux-pruned_transducer_stateless7-2023-05-19
|
yfyeung
| 2023-05-22T04:06:46Z | 0 | 0 | null |
[
"tensorboard",
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-05-19T10:05:07Z |
---
license: apache-2.0
---
# Introduction
This repo contains pre-trained models, checkpoints, training logs, and decoding results of the following pull request:
https://github.com/k2-fsa/icefall/pull/1059
|
YakovElm/MariaDB15Classic
|
YakovElm
| 2023-05-22T04:04:54Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-20T16:23:07Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB15Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1835
- Train Accuracy: 0.9305
- Validation Loss: 0.1779
- Validation Accuracy: 0.9598
- Epoch: 2
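A minimal inference sketch (not generated by the Keras callback); since the repository ships TensorFlow weights, `framework="tf"` is passed explicitly, and the example sentence is illustrative:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YakovElm/MariaDB15Classic",
    framework="tf",  # the repo contains TF weights
)
print(classifier("The build fails after upgrading to the latest release."))
```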
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2748 | 0.9264 | 0.1661 | 0.9598 | 0 |
| 0.2065 | 0.9297 | 0.1757 | 0.9598 | 1 |
| 0.1835 | 0.9305 | 0.1779 | 0.9598 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
leonhe/ppo-LunarLander-v2
|
leonhe
| 2023-05-22T04:03:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T04:03:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 225.17 +/- 41.05
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch is given below; the checkpoint filename inside the repo is an assumption, so adjust it to the actual file name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (the .zip filename is assumed).
checkpoint = load_from_hub("leonhe/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
qkids/ppo-LunarLander-v2
|
qkids
| 2023-05-22T03:48:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T03:48:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.85 +/- 23.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch is given below; the checkpoint filename inside the repo is an assumption, so adjust it to the actual file name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (the .zip filename is assumed).
checkpoint = load_from_hub("qkids/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
iamanavk/qm_sum_t5-base
|
iamanavk
| 2023-05-22T03:42:18Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-22T03:14:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: qm_sum_t5-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qm_sum_t5-base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2091
- Rouge1: 0.2135
- Rouge2: 0.0626
- Rougel: 0.1688
- Rougelsum: 0.1689
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 79 | 3.2969 | 0.2065 | 0.0591 | 0.1623 | 0.1621 | 18.9926 |
| No log | 2.0 | 158 | 3.2175 | 0.2173 | 0.067 | 0.1725 | 0.1725 | 19.0 |
| No log | 3.0 | 237 | 3.1909 | 0.2149 | 0.064 | 0.1716 | 0.1718 | 19.0 |
| No log | 4.0 | 316 | 3.2091 | 0.2135 | 0.0626 | 0.1688 | 0.1689 | 19.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
Pachosoad/Sad
|
Pachosoad
| 2023-05-22T03:13:58Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-05-22T03:13:58Z |
---
license: bigscience-openrail-m
---
|
YakovElm/MariaDB5Classic
|
YakovElm
| 2023-05-22T03:01:54Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-20T16:23:21Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB5Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2669
- Train Accuracy: 0.9013
- Validation Loss: 0.2763
- Validation Accuracy: 0.9322
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3419 | 0.8820 | 0.2456 | 0.9322 | 0 |
| 0.2844 | 0.8971 | 0.2508 | 0.9322 | 1 |
| 0.2669 | 0.9013 | 0.2763 | 0.9322 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Chen311/angie
|
Chen311
| 2023-05-22T02:51:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-22T02:46:50Z |
---
license: creativeml-openrail-m
---
|
YakovElm/Jira20Classic
|
YakovElm
| 2023-05-22T02:40:02Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-20T16:24:00Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira20Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2068
- Train Accuracy: 0.9255
- Validation Loss: 0.2729
- Validation Accuracy: 0.9338
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3798 | 0.8657 | 0.2552 | 0.9338 | 0 |
| 0.2667 | 0.9003 | 0.2573 | 0.9338 | 1 |
| 0.2068 | 0.9255 | 0.2729 | 0.9338 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
deepspringer/my_bert_model_courses_and_subjects
|
deepspringer
| 2023-05-22T02:36:44Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-22T02:36:16Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my_bert_model_courses_and_subjects
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my_bert_model_courses_and_subjects
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1188, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/Jira15Classic
|
YakovElm
| 2023-05-22T02:22:27Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-05-20T16:23:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira15Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira15Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4184
- Train Accuracy: 0.8027
- Validation Loss: 0.7165
- Validation Accuracy: 0.5331
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5149 | 0.7754 | 0.7342 | 0.5205 | 0 |
| 0.4648 | 0.7912 | 0.7246 | 0.5205 | 1 |
| 0.4184 | 0.8027 | 0.7165 | 0.5331 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
seank0602/videomae-base-finetuned-ucf101-subset
|
seank0602
| 2023-05-22T02:09:52Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-05-08T01:58:42Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5498
- Accuracy: 0.2407
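A minimal inference sketch (not generated by the Trainer); the video path is a placeholder, and the `video-classification` pipeline may require extra decoding dependencies such as `decord`:
```python
from transformers import pipeline

video_classifier = pipeline(
    "video-classification",
    model="seank0602/videomae-base-finetuned-ucf101-subset",
)
print(video_classifier("sample_clip.mp4"))  # placeholder path to a local video file
```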
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.483 | 0.25 | 76 | 1.5118 | 0.3409 |
| 1.3893 | 1.25 | 152 | 1.5561 | 0.2593 |
| 1.5231 | 2.25 | 228 | 1.5311 | 0.2721 |
| 1.5293 | 3.24 | 300 | 1.5498 | 0.2407 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
juanfkurucz/a2c-AntBulletEnv-v0
|
juanfkurucz
| 2023-05-22T02:02:09Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-05-22T02:01:05Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1695.53 +/- 123.47
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch is given below; the checkpoint filename inside the repo is an assumption, so adjust it to the actual file name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it (the .zip filename is assumed).
checkpoint = load_from_hub("juanfkurucz/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|