| Column | Type | Range / Stats |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-07 18:30:29 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 544 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-07 18:30:28 |
| card | string | length 11 – 1.01M |
---|---|---|---|---|---|---|---|---|---|
Edmon02/distilbert-base-uncased-finetuned-emotion
|
Edmon02
| 2023-06-26T12:34:11Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T11:45:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2281
- Accuracy: 0.9265
- F1: 0.9263
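A minimal, hedged usage sketch (assuming the checkpoint's config carries the emotion label names; the example sentence is illustrative only):
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned emotion classifier straight from the Hub.
classifier = pipeline(
    "text-classification",
    model="Edmon02/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy today!"))  # label names come from the checkpoint config
```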
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8449 | 1.0 | 250 | 0.3233 | 0.9105 | 0.9053 |
| 0.2669 | 2.0 | 500 | 0.2281 | 0.9265 | 0.9263 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
|
prathith/q-FrozenLake-v1-4x4-noSlippery
|
prathith
| 2023-06-26T12:16:38Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T12:16:36Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="prathith/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
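The `load_from_hub` helper above is not defined in the snippet; a minimal sketch of such a helper, assuming the repository stores a pickled dict with an `env_id` key (as the snippet itself suggests), could look like this:
```python
import pickle

import gymnasium as gym  # assumed; the classic `gym` package also works
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict from the Hub (hedged sketch)."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="prathith/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # is_slippery=False assumed, per the card's note
```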
|
kalyaniAI/autotrain-autotrain-69874137966
|
kalyaniAI
| 2023-06-26T12:08:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:kalyaniAI/autotrain-data-autotrain",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-26T12:07:46Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- kalyaniAI/autotrain-data-autotrain
co2_eq_emissions:
emissions: 0.025148621653341533
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 69874137966
- CO2 Emissions (in grams): 0.0251
## Validation Metrics
- Loss: 8.770
- Rouge1: 0.000
- Rouge2: 0.000
- RougeL: 0.000
- RougeLsum: 0.000
- Gen Len: 16.333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/kalyaniAI/autotrain-autotrain-69874137966
```
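The same request can be issued from Python; a hedged sketch using `requests` (the API key is a placeholder, as in the cURL command):
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/kalyaniAI/autotrain-autotrain-69874137966"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}  # placeholder token


def summarize(text: str):
    # Same call as the cURL command above, expressed with `requests`.
    response = requests.post(API_URL, headers=headers, json={"inputs": text})
    response.raise_for_status()
    return response.json()


print(summarize("I love AutoTrain"))
```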
|
mikeyang01/chinese-LLaMA-Alpaca-7B-quantized
|
mikeyang01
| 2023-06-26T12:08:23Z | 0 | 6 | null |
[
"region:us"
] | null | 2023-05-06T12:44:10Z |
The model is converted according to the document below.<br>
https://github.com/ymcui/Chinese-LLaMA-Alpaca
Online model merging and conversion tutorial:<br>
https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/在线模型合并与转换 <br>
https://colab.research.google.com/drive/1FnFkyKhrnS7s-2lDDeous-AutdI_SkAd?usp=sharing
**Due to Colab hardware limitations, many people may not be able to convert it successfully, <br>
so I converted it and put it here.**
|
namanjoshi123/tinyroberta-squad2
|
namanjoshi123
| 2023-06-26T12:07:19Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-26T12:02:32Z |
---
tags:
- generated_from_trainer
model-index:
- name: tinyroberta-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyroberta-squad2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
GregoRio123/ssmp
|
GregoRio123
| 2023-06-26T11:49:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T11:27:21Z |
---
license: creativeml-openrail-m
---
|
hypothetical/test_model
|
hypothetical
| 2023-06-26T11:42:45Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-26T11:42:45Z |
---
license: bigscience-openrail-m
---
|
fatcat22/Reinforce-PixelCopter
|
fatcat22
| 2023-06-26T11:39:26Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T09:03:31Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 25.90 +/- 20.71
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
afschowdhury/semantic-xlmr-bn
|
afschowdhury
| 2023-06-26T11:31:25Z | 26 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"dpr",
"bn",
"multilingual",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-02-01T11:40:30Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- dpr
- bn
- multilingual
widget:
- source_sentence: "আমি বাংলায় গান গাই"
sentences:
- "I sing in Bangla"
- "I sing in Bengali"
- "I sing in English"
- "আমি গান গাই না "
example_title: "Singing"
---
# `s-xlmr-bn`
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like **clustering** or **semantic search**.
<!--- Describe your model here -->
## Model Details
- Model name: s-xlmr-bn
- Model version: 1.0
- Architecture: Sentence Transformer
- Language: Multilingual (fine-tuned for Bengali)
- Base Models:
- [paraphrase-distilroberta-base-v2](https://huggingface.co/sentence-transformers/paraphrase-distilroberta-base-v2) [Teacher Model]
- [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) [Student Model]
## Training
The model was fine-tuned using **Multilingual Knowledge Distillation** method. We took `paraphrase-distilroberta-base-v2` as the teacher model and `xlm-roberta-large` as the student model.

## Intended Use:
- **Primary Use Case:** Semantic similarity, clustering, and semantic searches
- **Potential Use Cases:** Document retrieval, information retrieval, recommendation systems, chatbot systems, and FAQ systems
## Usage
### Using Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["I sing in bengali", "আমি বাংলায় গান গাই"]
model = SentenceTransformer('afschowdhury/s-xlmr-bn')
embeddings = model.encode(sentences)
print(embeddings)
```
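Since the model targets semantic similarity, a natural follow-up is to score sentence pairs with cosine similarity; a hedged sketch reusing the card's own example sentences:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('afschowdhury/s-xlmr-bn')

query = "আমি বাংলায় গান গাই"
candidates = ["I sing in Bangla", "I sing in English", "আমি গান গাই না"]

query_emb = model.encode(query, convert_to_tensor=True)
cand_emb = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity between the query and each candidate sentence.
scores = util.cos_sim(query_emb, cand_emb)
print(scores)
```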
### Using HuggingFace Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["I sing in bengali", "আমি বাংলায় গান গাই"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('afschowdhury/s-xlmr-bn')
model = AutoModel.from_pretrained('afschowdhury/s-xlmr-bn')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
### Point of Contact
**Asif Faisal Chowdhury**
E-mail: [afschowdhury@gmail.com](mailto:afschowdhury@gmail.com) | Linked-in: [afschowdhury](https://www.linkedin.com/in/afschowdhury)
|
SumanTenzai/Dummy
|
SumanTenzai
| 2023-06-26T11:29:52Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-26T08:43:55Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Dummy
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Dummy
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
christinakyp/whisper-tiny-train1
|
christinakyp
| 2023-06-26T10:50:11Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"asr",
"generated_from_trainer",
"en",
"dataset:christinakyp/dsing1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T09:22:12Z |
---
language:
- en
license: apache-2.0
tags:
- asr
- generated_from_trainer
datasets:
- christinakyp/dsing1
model-index:
- name: Whisper Tiny Sing - CK
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Sing - CK
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the DSing1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
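A hedged usage sketch, assuming a local 16 kHz audio file (the path below is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: transcribe a local audio file with the fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="christinakyp/whisper-tiny-train1")
print(asr("sung_sample.wav"))  # placeholder path to an audio file
```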
|
davanstrien/CamemBERT-MedNERF
|
davanstrien
| 2023-06-26T10:42:55Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"camembert",
"token-classification",
"autotrain",
"medical",
"fr",
"dataset:Posos/MedNERF",
"license:mit",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-26T10:07:00Z |
---
tags:
- autotrain
- token-classification
- medical
language:
- fr
widget:
- text: Prendré 2 compris par jour, pendant 1 mois.
- text: DOLIPRANETABS 1000 MG CPR PELL PLQ/8 (Paracétamol 1.000mg comprimé)
datasets:
- Posos/MedNERF
co2_eq_emissions:
emissions: 0.11647938304211661
license: mit
metrics:
- f1
- accuracy
- precision
- recall
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 69856137957
- CO2 Emissions (in grams): 0.1165
## Validation Metrics
- Loss: 1.510
- Accuracy: 0.706
- Precision: 0.648
- Recall: 0.679
- F1: 0.663
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-french-ner-blank-model-69856137957
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("davanstrien/autotrain-french-ner-blank-model-69856137957", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-french-ner-blank-model-69856137957", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
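A hedged follow-up sketch that maps the raw logits back to label names via the model config (the French example sentence is taken from the card's widget; pass a token if the repo requires authentication):
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

repo = "davanstrien/autotrain-french-ner-blank-model-69856137957"
model = AutoModelForTokenClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

text = "Prendré 2 compris par jour, pendant 1 mois."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Map each token's highest-scoring logit back to its label name.
predictions = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[int(label_id)])
```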
|
El-is-a-bat/ToonYou
|
El-is-a-bat
| 2023-06-26T10:19:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T09:51:24Z |
---
license: creativeml-openrail-m
---
|
jondurbin/airoboros-mpt-30b-gpt4-1p4-three-epochs
|
jondurbin
| 2023-06-26T10:12:58Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T16:40:13Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
## Overview
This is a test of qlora fine-tuning of the mpt-30b model, __with 3 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*This is not as high quality as the llama-33b versions unfortunately, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise impermissibly licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
|
jondurbin/airoboros-mpt-30b-gpt4-1p4-four-epochs
|
jondurbin
| 2023-06-26T10:12:48Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T16:17:04Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
## Overview
This is a test of qlora fine-tuning of the mpt-30b model, __with 4 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*This is not as high quality as the llama-33b versions unfortunately, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise impermissibly licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
|
jondurbin/airoboros-mpt-30b-gpt4-1p4-six-epochs
|
jondurbin
| 2023-06-26T10:12:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T08:37:14Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
## Overview
This is a test of qlora fine-tuning of the mpt-30b model, __with 6 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*This is not as high quality as the llama-33b versions unfortunately, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise impermissibly licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
|
Jade1211/textual_inversion_firework
|
Jade1211
| 2023-06-26T10:05:48Z | 9 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T07:06:25Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Jade1211/textual_inversion_firework
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
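A hedged usage sketch with `diffusers`: the placeholder token in the prompt is an assumption (use the token the embedding was actually trained with), and a CUDA device is assumed.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a GPU is available

# Load the textual inversion embedding from this repository.
pipe.load_textual_inversion("Jade1211/textual_inversion_firework")

# "<firework>" is an assumed placeholder token; replace it with the trained token.
image = pipe("a night sky full of <firework>").images[0]
image.save("firework.png")
```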
|
heka-ai/gpl-test
|
heka-ai
| 2023-06-26T10:04:16Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-26T10:04:12Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# heka-ai/gpl-test
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('heka-ai/gpl-test')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('heka-ai/gpl-test')
model = AutoModel.from_pretrained('heka-ai/gpl-test')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/gpl-test)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Qasim30/taxi-v3-hugging
|
Qasim30
| 2023-06-26T09:52:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T09:52:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-hugging
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Qasim30/taxi-v3-hugging", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
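A hedged sketch of rolling out the downloaded Q-table greedily for one episode; the `"qtable"` key and the pickled-dict format are assumptions about how the course stores the model, and `gymnasium` is assumed as the environment package:
```python
import pickle

import gymnasium as gym  # assumed; the classic `gym` package also works
import numpy as np
from huggingface_hub import hf_hub_download

# Download the pickled model dict; the "qtable" key is an assumption about the stored format.
path = hf_hub_download(repo_id="Qasim30/taxi-v3-hugging", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, info = env.reset()
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("Episode reward:", total_reward)
```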
|
Sinu97/test_model
|
Sinu97
| 2023-06-26T09:52:28Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T09:31:15Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Sinu97/test_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Sinu97/test_model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6932
- Validation Loss: 0.6869
- Train Accuracy: 0.8806
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6900 | 0.6869 | 0.8806 | 0 |
| 0.6901 | 0.6869 | 0.8806 | 1 |
| 0.6893 | 0.6869 | 0.8806 | 2 |
| 0.6913 | 0.6869 | 0.8806 | 3 |
| 0.6932 | 0.6869 | 0.8806 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
aliyzd95/wav2vec2-mms-1b-turkish
|
aliyzd95
| 2023-06-26T09:38:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_6_1",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T06:28:08Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- common_voice_6_1
metrics:
- wer
model-index:
- name: wav2vec2-mms-1b-turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_6_1
type: common_voice_6_1
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.20978449596568277
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-mms-1b-turkish
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_6_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1443
- Wer: 0.2098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.2036 | 0.46 | 100 | 0.1980 | 0.2614 |
| 0.3 | 0.92 | 200 | 0.1918 | 0.2725 |
| 0.2735 | 1.38 | 300 | 0.1672 | 0.2346 |
| 0.2672 | 1.83 | 400 | 0.1671 | 0.2312 |
| 0.2641 | 2.29 | 500 | 0.1598 | 0.2248 |
| 0.2541 | 2.75 | 600 | 0.1587 | 0.2270 |
| 0.2696 | 3.21 | 700 | 0.1546 | 0.2235 |
| 0.2315 | 3.67 | 800 | 0.1559 | 0.2259 |
| 0.2396 | 4.13 | 900 | 0.1534 | 0.2172 |
| 0.2284 | 4.59 | 1000 | 0.1521 | 0.2172 |
| 0.2342 | 5.05 | 1100 | 0.1523 | 0.2178 |
| 0.2163 | 5.5 | 1200 | 0.1520 | 0.2184 |
| 0.2272 | 5.96 | 1300 | 0.1504 | 0.2182 |
| 0.2122 | 6.42 | 1400 | 0.1483 | 0.2149 |
| 0.2162 | 6.88 | 1500 | 0.1472 | 0.2100 |
| 0.2104 | 7.34 | 1600 | 0.1466 | 0.2104 |
| 0.2004 | 7.8 | 1700 | 0.1457 | 0.2110 |
| 0.2156 | 8.26 | 1800 | 0.1455 | 0.2134 |
| 0.1981 | 8.72 | 1900 | 0.1451 | 0.2103 |
| 0.1921 | 9.17 | 2000 | 0.1452 | 0.2105 |
| 0.19 | 9.63 | 2100 | 0.1443 | 0.2098 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
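A hedged usage sketch, assuming a local 16 kHz Turkish audio file (the path below is a placeholder):
```python
from transformers import pipeline

# Minimal sketch: transcribe a Turkish audio file with the fine-tuned MMS checkpoint.
asr = pipeline("automatic-speech-recognition", model="aliyzd95/wav2vec2-mms-1b-turkish")
print(asr("speech_sample.wav"))  # placeholder path to an audio file
```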
|
Jade1211/textual_inversion_singer
|
Jade1211
| 2023-06-26T09:36:56Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T06:38:53Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Jade1211/textual_inversion_singer
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
|
imran90/MathBot2
|
imran90
| 2023-06-26T09:34:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T09:34:14Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ce-dric/taxi-v3
|
ce-dric
| 2023-06-26T09:13:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T09:13:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="ce-dric/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ce-dric/q-FrozenLake-v1-4x4-noSlippery
|
ce-dric
| 2023-06-26T09:12:46Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T09:12:44Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ce-dric/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
casque/disneyPixarCartoon_v10_2
|
casque
| 2023-06-26T08:52:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T08:22:04Z |
---
license: creativeml-openrail-m
---
|
Ashraf-kasem/ppo-LunarLander-v2
|
Ashraf-kasem
| 2023-06-26T08:52:13Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T08:51:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 222.18 +/- 17.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
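Since the card's code block is still a TODO, here is a hedged sketch of loading and running the checkpoint; the filename `ppo-LunarLander-v2.zip` is an assumption (check the repository's file listing), and `gymnasium` is assumed as the environment package:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is an assumption; verify it against the repo's files.
checkpoint = load_from_hub(repo_id="Ashraf-kasem/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```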
|
abelko/abel-alpaca
|
abelko
| 2023-06-26T08:39:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T08:39:41Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
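A hedged sketch of loading the adapter on top of its base model, assuming the repository contains an `adapter_config.json` and that the base is a causal language model (the 8-bit loading mirrors the quantization config above and requires `bitsandbytes`/`accelerate`):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "abelko/abel-alpaca"

# Read the adapter config to find the base model it was trained on.
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    load_in_8bit=True,   # matches the load_in_8bit: True setting above
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the PEFT adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
```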
|
imran90/MathBot
|
imran90
| 2023-06-26T08:37:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T08:37:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
ekimw/bert-finetuned-ner
|
ekimw
| 2023-06-26T08:33:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-26T08:21:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9332782824112303
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9420688505459699
- name: Accuracy
type: accuracy
value: 0.9867104256195914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9333
- Recall: 0.9510
- F1: 0.9421
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0887 | 1.0 | 1756 | 0.0687 | 0.9198 | 0.9339 | 0.9268 | 0.9819 |
| 0.0335 | 2.0 | 3512 | 0.0622 | 0.9216 | 0.9461 | 0.9337 | 0.9859 |
| 0.018 | 3.0 | 5268 | 0.0612 | 0.9333 | 0.9510 | 0.9421 | 0.9867 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
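A hedged usage sketch with the token-classification pipeline; the example sentence is illustrative only:
```python
from transformers import pipeline

# Minimal sketch: group sub-word tokens into full entity spans.
ner = pipeline("token-classification", model="ekimw/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```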
|
slone/bert-base-multilingual-cased-bak-rus-similarity
|
slone
| 2023-06-26T08:30:19Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"ba",
"ru",
"dataset:AigizK/bashkir-russian-parallel-corpora",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-17T19:32:58Z |
---
license: apache-2.0
datasets:
- AigizK/bashkir-russian-parallel-corpora
language:
- ba
- ru
pipeline_tag: text-classification
---
This is a text pair classifier, trained to predict whether a Bashkir sentence and a Russian sentence have the same meaning.
It can be used for filtering parallel corpora or evaluating machine translation quality.
It can be applied to predict scores like this:
```Python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
clf_name = 'slone/bert-base-multilingual-cased-bak-rus-similarity'
clf = AutoModelForSequenceClassification.from_pretrained(clf_name)
clf_tokenizer = AutoTokenizer.from_pretrained(clf_name)
def classify(texts_ba, texts_ru):
    with torch.inference_mode():
        batch = clf_tokenizer(texts_ba, texts_ru, padding=True, truncation=True, max_length=512, return_tensors='pt').to(clf.device)
        return torch.softmax(clf(**batch).logits.view(-1, 2), -1)[:, 1].cpu().numpy()
print(classify(['Сәләм, ғаләм!', 'Хәйерле көн, тыныслыҡ.'], ['Привет, мир!', 'Мама мыла раму.']))
# [0.96345973 0.02213471]
```
For most "good" sentence pairs, these scores are above 0.5.
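Continuing from the snippet above (and reusing its `classify` helper), a hedged sketch of filtering a parallel corpus with that 0.5 threshold; the sentence pairs are the card's own examples:
```python
# Hedged sketch: keep only Bashkir-Russian pairs scoring above 0.5.
pairs = [("Сәләм, ғаләм!", "Привет, мир!"), ("Хәйерле көн, тыныслыҡ.", "Мама мыла раму.")]
scores = classify([ba for ba, ru in pairs], [ru for ba, ru in pairs])
filtered = [pair for pair, score in zip(pairs, scores) if score > 0.5]
print(filtered)
```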
|
Geotrend/bert-base-en-fr-cased
|
Geotrend
| 2023-06-26T08:20:44Z | 117 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: multilingual
datasets: wikipedia
license: apache-2.0
widget:
- text: "Google generated 46 billion [MASK] in revenue."
- text: "Paris is the capital of [MASK]."
- text: "Algiers is the largest city in [MASK]."
- text: "Paris est la [MASK] de la France."
- text: "Paris est la capitale de la [MASK]."
- text: "L'élection américaine a eu [MASK] en novembre 2020."
---
# bert-base-en-fr-cased
We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages.
Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions produce exactly the same representations as the original model, which preserves the original accuracy.
For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-cased")
model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-cased")
```
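Since this is a masked language model, it can also be queried through the fill-mask pipeline; a hedged sketch using one of the card's own widget prompts:
```python
from transformers import pipeline

# Minimal sketch: fill the [MASK] token with the bilingual EN/FR model.
fill_mask = pipeline("fill-mask", model="Geotrend/bert-base-en-fr-cased")
print(fill_mask("Paris est la [MASK] de la France."))
```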
To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
### How to cite
```bibtex
@inproceedings{smallermbert,
title={Load What You Need: Smaller Versions of Multilingual BERT},
author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
booktitle={SustaiNLP / EMNLP},
year={2020}
}
```
## Contact
Please contact amine@geotrend.fr for any question, feedback or request.
|
ibm-research/otter_dude_transe
|
ibm-research
| 2023-06-26T08:09:08Z | 0 | 2 | null |
[
"dataset:ibm/otter_dude",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T09:56:43Z |
---
license: mit
inference: false
datasets:
- ibm/otter_dude
---
# Otter DUDe TransE Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to each node's neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer, which consists of a set of linear transformations (one for each node modality) and projects nodes into a common dimensionality; we then apply several multi-relational graph convolutional layers (R-GCN), which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which uses a scoring function that maps each triple of source node, target node, and the corresponding edge to a scalar defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring function that are commonly used in the literature: DistMult, TransE, and a Binary Classifier. The outcome of scoring each triple is then compared against the actual label using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| TransE | No | Yes | No |
**Model training data:**
The model was trained over a preprocessed version of *DUDe*. Our preprocessed version of *DUDe* includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:center;text-emphasis:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.576</td>
<td class="tg-c3ow">0.807</td>
<td class="tg-c3ow">0.570</td>
<td class="tg-c3ow">0.170</td>
<td class="tg-c3ow">0.856</td>
<td class="tg-c3ow">0.653</td>
<td class="tg-c3ow">0.604</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_dude_transe --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_dude_transe --output_path output_path
```
|
ibm-research/otter_dude_distmult
|
ibm-research
| 2023-06-26T08:08:38Z | 0 | 2 | null |
[
"dataset:ibm/otter_dude",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T09:51:02Z |
---
license: mit
inference: false
datasets:
- ibm/otter_dude
---
# Otter DUDe DistMult Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to each node's neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer, which consists of a set of linear transformations (one for each node modality) and projects nodes into a common dimensionality; we then apply several multi-relational graph convolutional layers (R-GCN), which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which uses a scoring function that maps each triple of source node, target node, and the corresponding edge to a scalar defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring function that are commonly used in the literature: DistMult, TransE, and a Binary Classifier. The outcome of scoring each triple is then compared against the actual label using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| DistMult | No | Yes | No |
**Model training data:**
The model was trained over a preprocessed version of *DUDe*. Our preprocessed version of *DUDe* includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:center;text-emphasis:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.577</td>
<td class="tg-c3ow">0.805</td>
<td class="tg-c3ow">0.573</td>
<td class="tg-c3ow">0.132</td>
<td class="tg-c3ow">0.857</td>
<td class="tg-c3ow">0.650</td>
<td class="tg-c3ow">0.607</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_dude_distmult --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_dude_distmult --output_path output_path
```
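If you want to inspect the embeddings written to `output_path`, the snippet below only peeks at the top-level JSON structure; the exact layout of the file is not documented here, so treat this as a generic starting point.
```python
import json

# open the file written by inference.py and look at its top-level structure
with open("output_path", "r") as f:
    embeddings = json.load(f)

print(type(embeddings))
if isinstance(embeddings, dict):
    first_key = next(iter(embeddings))
    print(first_key, str(embeddings[first_key])[:200])
elif isinstance(embeddings, list):
    print(str(embeddings[0])[:200])
```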
|
ibm-research/otter_stitch_transe
|
ibm-research
| 2023-06-26T08:08:17Z | 0 | 2 | null |
[
"dataset:ibm/otter_stitch",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-13T15:59:30Z |
---
license: mit
inference: false
datasets:
- ibm/otter_stitch
---
# Otter STITCH TransE Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to the node neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer, which consists of a set of linear transformations (one for each node modality) that project the nodes into a common dimensionality; we then apply several multi-relational graph convolutional layers (R-GCN), which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source and target nodes and the corresponding edge to a scalar number defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring functions that are commonly used in the literature: DistMult, TransE, and a Binary Classifier. The score of each triple is then compared against the actual label using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning only amino acid sequences and SMILES are available. Consequently, during pretraining we explore two scenarios: one that controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide insight into the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links in the upstream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean squared error (RMSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective into a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| TransE | No | Yes | No |
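For reference, TransE scores a triple by how close the translated head embedding h + r lands to the tail embedding t. The sketch below is illustrative only; the margin and the sigmoid mapping to (0, 1) are assumptions rather than the exact formulation used in the repository.
```python
import torch

def transe_score(h: torch.Tensor, r: torch.Tensor, t: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    # TransE: a triple (h, r, t) is plausible when h + r is close to t;
    # the L2 distance is mapped to (0, 1) so it can be read as a score
    distance = torch.norm(h + r - t, p=2, dim=-1)
    return torch.sigmoid(margin - distance)

# toy (chemical, relation, protein) triple with 128-dimensional embeddings
h, r, t = torch.randn(3, 128)
print(transe_score(h, r, t))
```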
**Model training data:**
The model was trained over *STITCH*. STITCH (Search Tool for Interacting Chemicals) is a database of known and predicted interactions between chemicals, represented by SMILES strings, and proteins, whose sequences are taken from the STRING database. It contains 10,717,791 triples for 17,572 different chemicals and 1,886,496 different proteins. Furthermore, the graph was split into 5 roughly equal-sized subgraphs, and the GNN was trained sequentially on each of them, updating the model trained on the previous subgraph.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:middle;font-weight:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.578</td>
<td class="tg-c3ow">0.814</td>
<td class="tg-c3ow">0.572</td>
<td class="tg-c3ow">0.119</td>
<td class="tg-c3ow">0.859</td>
<td class="tg-c3ow">0.636</td>
<td class="tg-c3ow">0.635</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_stitch_transe --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_stitch_transe --output_path output_path
```
|
ibm-research/otter_primekg_distmult
|
ibm-research
| 2023-06-26T08:07:42Z | 0 | 3 | null |
[
"dataset:ibm/otter_primekg",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T10:31:11Z |
---
license: mit
inference: false
datasets:
- ibm/otter_primekg
---
# Otter PrimeKG DistMult Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to the node neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer, which consists of a set of linear transformations (one for each node modality) that project the nodes into a common dimensionality; we then apply several multi-relational graph convolutional layers (R-GCN), which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source and target nodes and the corresponding edge to a scalar number defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring functions that are commonly used in the literature: DistMult, TransE, and a Binary Classifier. The score of each triple is then compared against the actual label using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning only amino acid sequences and SMILES are available. Consequently, during pretraining we explore two scenarios: one that controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide insight into the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links in the upstream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean squared error (RMSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective into a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| DistMult | No | Yes | No |
**Model training data:**
The model was trained over *PrimeKG* (the Precision Medicine Knowledge Graph). *PrimeKG* integrates 20 biomedical resources, describing 17,080 diseases with 4 million relationships. *PrimeKG* includes nodes describing Gene/Proteins (29,786) and Drugs (7,957 nodes). The Multimodal Knowledge Graph (MKG) that we built from PrimeKG contains 13 modalities, 12,757,300 edges (154,130 data properties, and 12,603,170 object properties), including 642,150 edges describing interactions between proteins, 25,653 edges describing drug-protein interactions, and 2,672,628 describing interactions between drugs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:middle;font-weight:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.575</td>
<td class="tg-c3ow">0.806</td>
<td class="tg-c3ow">0.571</td>
<td class="tg-c3ow">0.162</td>
<td class="tg-c3ow">0.856</td>
<td class="tg-c3ow">0.611</td>
<td class="tg-c3ow">0.617</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_primekg_distmult --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_primekg_distmult --output_path output_path
```
|
ibm-research/otter_primekg_transe
|
ibm-research
| 2023-06-26T08:07:19Z | 0 | 2 | null |
[
"dataset:ibm/otter_primekg",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T10:32:56Z |
---
license: mit
inference: false
datasets:
- ibm/otter_primekg
---
# Otter PrimeKG TransE Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to the node neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer, which consists of a set of linear transformations (one for each node modality) that project the nodes into a common dimensionality; we then apply several multi-relational graph convolutional layers (R-GCN), which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source and target nodes and the corresponding edge to a scalar number defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring functions that are commonly used in the literature: DistMult, TransE, and a Binary Classifier. The score of each triple is then compared against the actual label using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning only amino acid sequences and SMILES are available. Consequently, during pretraining we explore two scenarios: one that controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide insight into the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links in the upstream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean squared error (RMSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective into a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| TransE | No | Yes | No |
**Model training data:**
The model was trained over *PrimeKG* (the Precision Medicine Knowledge Graph). *PrimeKG* integrates 20 biomedical resources, describing 17,080 diseases with 4 million relationships. *PrimeKG* includes nodes describing Gene/Proteins (29,786) and Drugs (7,957 nodes). The Multimodal Knowledge Graph (MKG) that we built from PrimeKG contains 13 modalities, 12,757,300 edges (154,130 data properties, and 12,603,170 object properties), including 642,150 edges describing interactions between proteins, 25,653 edges describing drug-protein interactions, and 2,672,628 describing interactions between drugs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:middle;font-weight:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.573</td>
<td class="tg-c3ow">0.807</td>
<td class="tg-c3ow">0.568</td>
<td class="tg-c3ow">0.186</td>
<td class="tg-c3ow">0.858</td>
<td class="tg-c3ow">0.642</td>
<td class="tg-c3ow">0.607</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_primekg_transe --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```sh
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_primekg_transe --output_path output_path
```
|
navndn/ppo-LunarLander-v2
|
navndn
| 2023-06-26T07:59:45Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T07:59:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.99 +/- 38.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
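A minimal loading sketch, assuming the checkpoint in this repository follows the usual `ppo-LunarLander-v2.zip` naming convention (the exact file name is not stated in this card):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# download the checkpoint (file name is an assumption based on the usual convention)
checkpoint = load_from_hub(repo_id="navndn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# roll out a single episode with the trained policy
env = gym.make("LunarLander-v2")
obs, info = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```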
|
yhna/q-learning-taxi-v3
|
yhna
| 2023-06-26T07:52:08Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T07:52:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or: import gymnasium as gym, depending on your setup

# load_from_hub is the helper defined in the Deep RL Course notebook;
# it downloads and unpickles the saved Q-table dictionary from the Hub
model = load_from_hub(repo_id="yhna/q-learning-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
yhna/q-FrozenLake-v1-4x4-noSlippery
|
yhna
| 2023-06-26T07:50:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T07:50:48Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or: import gymnasium as gym, depending on your setup

# load_from_hub is the helper from the Deep RL Course notebook (downloads and unpickles the Q-table dict)
model = load_from_hub(repo_id="yhna/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LarryAIDraw/bluereflection_shirai-11
|
LarryAIDraw
| 2023-06-26T07:03:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T06:49:47Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/67268?modelVersionId=71905
|
LarryAIDraw/fuxuanV3c
|
LarryAIDraw
| 2023-06-26T07:02:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T06:47:02Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/60053/honkai-star-rail-fu-xuan-or
|
al123/my_qa_model
|
al123
| 2023-06-26T06:59:58Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-26T06:48:24Z |
---
tags:
- generated_from_trainer
model-index:
- name: my_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_qa_model
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 4.4635 |
| 4.4252 | 2.0 | 500 | 4.4132 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
vineet1409/fine-tuned-AlBERT
|
vineet1409
| 2023-06-26T06:25:58Z | 9 | 0 |
bertopic
|
[
"bertopic",
"tf",
"albert",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-26T06:22:52Z |
---
library_name: bertopic
pipeline_tag: text-classification
---
|
sleepotimer/SweetParfait
|
sleepotimer
| 2023-06-26T06:25:35Z | 0 | 18 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-22T15:43:02Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# SweetParfait
CuteYukiKawaShow + MoreParfait + 3A + 2A
## Examples
<img src="https://huggingface.co/sleepotimer/SweetParfait/resolve/main/example-1.png" width="768px">
<img src="https://huggingface.co/sleepotimer/SweetParfait/resolve/main/example-2.png" width="768px">
<img src="https://huggingface.co/sleepotimer/SweetParfait/resolve/main/example-3.png" width="768px">
<img src="https://huggingface.co/sleepotimer/SweetParfait/resolve/main/example-4.png" width="768px">
|
vineet1409/fine-tuned-bioclinical-BERT
|
vineet1409
| 2023-06-26T06:19:10Z | 3 | 0 |
bertopic
|
[
"bertopic",
"tf",
"bert",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-26T06:11:05Z |
---
library_name: bertopic
pipeline_tag: text-classification
---
|
enip2473/testing
|
enip2473
| 2023-06-26T06:04:13Z | 0 | 0 | null |
[
"translation",
"ru",
"en",
"dataset:wmt19",
"license:apache-2.0",
"region:us"
] |
translation
| 2023-06-26T05:30:16Z |
---
language:
- ru
- en
tags:
- translation
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
- sacrebleu
---
# My first huggingface model
Hello this is test message.
|
justinhoang/Pyramids
|
justinhoang
| 2023-06-26T05:44:45Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-26T05:44:41Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: justinhoang/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
RiturajB/RL_projects
|
RiturajB
| 2023-06-26T05:20:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T05:20:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.30 +/- 14.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
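A minimal evaluation sketch, assuming the checkpoint is stored under the usual `ppo-LunarLander-v2.zip` name (not stated in this card):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# file name is an assumption based on the usual upload convention
checkpoint = load_from_hub(repo_id="RiturajB/RL_projects", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# re-evaluate the reported mean reward over 10 episodes
eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```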
|
Retrial9842/ppo-cleanrl-LunarLander-v2
|
Retrial9842
| 2023-06-26T05:18:56Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T04:26:01Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -139.40 +/- 97.19
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 200000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.9999
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Retrial9842/ppo-cleanrl-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
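For reference, the two derived sizes at the bottom follow directly from the rollout settings above: `batch_size = num_envs * num_steps = 4 * 128 = 512`, and `minibatch_size = batch_size // num_minibatches = 512 // 4 = 128`.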
|
evatan/beetroot_wo_prior
|
evatan
| 2023-06-26T05:18:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T05:09:29Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks beetroot
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/beetroot_wo_prior
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks beetroot using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
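A minimal diffusers sketch for sampling with the instance prompt; the fp16/CUDA settings and the step/guidance values are generic assumptions, not parameters from the training run.
```python
import torch
from diffusers import StableDiffusionPipeline

# load the DreamBooth weights from this repository
pipe = StableDiffusionPipeline.from_pretrained("evatan/beetroot_wo_prior", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# sample with the instance prompt the weights were trained on
image = pipe("a photo of sks beetroot", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_beetroot.png")
```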
|
csukuangfj/sherpa-onnx-zipformer-small-en-2023-06-26
|
csukuangfj
| 2023-06-26T05:17:08Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-06-26T04:59:34Z |
---
license: apache-2.0
---
The torchscript model is from
https://huggingface.co/Zengwei/icefall-asr-librispeech-zipformer-small-2023-05-16
The training code is from
https://github.com/k2-fsa/icefall/pull/1058
|
Suchinthana/sinhala-gpt-neo
|
Suchinthana
| 2023-06-26T05:14:48Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neo",
"text-generation",
"si",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-01-20T16:15:06Z |
---
license: mit
widget:
- text: කවියා නුමුහු කළ නුවණ
- text: පිරිසිදු පානීය ජලය
language:
- si
pipeline_tag: text-generation
---
### Fine-tuned GPT Neo 125M
This model is fine-tuned with a [Sinhala dataset](https://github.com/TharukaCkasthuri/plagiarism_detection_dataset_sinhala) for Sinhala text generation.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='Suchinthana/sinhala-gpt-neo')
>>> generator("කවියා නුමුහු කළ නුවණ ", do_sample=True, max_length=500)
```
|
nolanaatama/tylrswftrvc490pchswlnwn
|
nolanaatama
| 2023-06-26T05:13:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T05:10:36Z |
---
license: creativeml-openrail-m
---
|
Suchinthana/Amenity-Hashtag-Classifier
|
Suchinthana
| 2023-06-26T05:11:58Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T16:46:07Z |
---
license: apache-2.0
widget:
- text: '#WetTogether'
- text: '#OutWithFamily'
- text: '#PartyOnWaves'
---
|
evatan/beetroot_w_prior
|
evatan
| 2023-06-26T05:06:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T02:38:19Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks beetroot
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - evatan/beetroot_w_prior
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks beetroot using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
|
draziert/Reinforce-cartpole-v1
|
draziert
| 2023-06-26T04:54:09Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T04:53:59Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JaminOne/output
|
JaminOne
| 2023-06-26T04:48:44Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T04:17:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: output
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4900
- Accuracy: 0.575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
emailconverter/How-To-Export-Emails-From-Outlook-For-Mac-To-Outlook-For-Windows
|
emailconverter
| 2023-06-26T04:45:42Z | 0 | 0 | null |
[
"Convert OLM files",
"en",
"region:us"
] | null | 2023-06-26T04:30:15Z |
---
language:
- en
tags:
- Convert OLM files
---
<h1>How To Export Emails From Outlook For Mac To Outlook For Windows?</h1>
Are you also looking for an effective process to export emails from Outlook for Mac to Outlook for Windows? If that is the case, read this informative guide to get the most proven solution to transfer emails from Outlook for Mac to Outlook for Windows.
Outlook is a secure email client available for both Mac and Windows. The two versions are largely the same; the difference lies in the file format used to store mailbox data. On Mac, Outlook saves its mailbox data in an OLM file, while Outlook for Windows uses a PST file. If you want to open emails from Outlook for Mac in Outlook for Windows, you need to use an <a href="https://forensiksoft.com/converter/olm-to-pst.html">OLM to PST converter</a> to convert the Outlook Mac files into a format compatible with the Outlook Windows platform.
<h2>Reason To Transfer Emails From Outlook For Mac To Outlook For Windows</h2>
<ul><li>When you receive an OLM file from a customer and want to access its mailbox data in Windows.</li>
<li>When a user wants to transfer their mailbox data to Outlook for Windows.</li>
<li>If you want to view projects or updates from the organization.</li></ul>
<h2>How To Export Emails From Outlook For Mac To Outlook For Windows?</h2>
Unfortunately, there is no direct solution available. Therefore, we recommend you opt for the <B>4n6 <a href="https://forensiksoft.com/file-converter/olm.html">OLM Converter</a></B>. It is the best software to convert large OLM files in a single step without encountering any errors. The software is built around a powerful algorithm that produces a secure output without any risk of an information leak.
<ul><li>Install and Start OLM Converter on your Windows computer.</li>
<li>Choose OLM files and add them to the software interface.</li>
<li>From the various export options, tap on the PST option.</li>
<li>Finally, browse the desired location for the result and click "Export" to get the output immediately.</li></ul>
<h3>Why Do Professionals Always Rely On Automated Software?</h3>
<ul><li>It retains the same mailbox structure as the original. Also, the <a href="https://forensiksoft.com/file-converter/outlook-pst.html">PST converter</a> offers advanced security to keep the original data unchanged.</li>
<li>This software is very easy to use and does not require much technical knowledge.</li>
<li>It allows you to <a href="https://forensiksoft.com/blog/convert-olm-to-csv/">convert OLM to CSV</a>, HTML, EML, PST, and MBOX, and offers various options.</li>
<li>It also allows you to <a href="https://forensiksoft.com/blog/import-olm-to-thunderbird/">import OLM to Thunderbird</a> with the same application.</li>
<li>It also provides a way to <a href="https://forensiksoft.com/blog/extract-email-addresses-from-olm-file/">extract email addresses from an OLM file</a>.</li></ul>
<h4>Wrapping Up</h4>
In this technical post, we have explained the complete process of exporting emails from Outlook for Mac to Outlook for Windows. You can follow the above process to export email messages, including attachments, from Outlook for Mac to Windows. If you have any questions, you can contact our technical experts anytime, anywhere. So why wait? Try this tool today. It offers a free demo version that converts the first 10 files from each folder so you can test its performance and other useful features. If you are satisfied with the software, you can buy a key to unlock the premium version and convert files without any limitations. A one-time investment lets you use the services for life.
|
djifg/grow_classification_xlmr
|
djifg
| 2023-06-26T04:42:53Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T04:19:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: grow_classification_xlmr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# grow_classification_xlmr
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5685
- Accuracy: 0.9331
- F1: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2912 | 1.0 | 450 | 0.4021 | 0.9166 | 0.9159 |
| 0.0609 | 2.0 | 900 | 0.5478 | 0.9163 | 0.9155 |
| 0.0304 | 3.0 | 1350 | 0.5494 | 0.9273 | 0.9266 |
| 0.0154 | 4.0 | 1800 | 0.5599 | 0.9309 | 0.9301 |
| 0.0092 | 5.0 | 2250 | 0.5685 | 0.9331 | 0.9323 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
wilmerhenao/olinguito
|
wilmerhenao
| 2023-06-26T04:22:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-25T22:25:43Z |
This is a fine-tuning of GPT-J-6B using LoRA - https://huggingface.co/EleutherAI/gpt-j-6B
The dataset is the cleaned version of the Alpaca dataset - https://github.com/gururise/AlpacaDataCleaned
Similar models have been discussed before.
The performance is good, but not as good as the original Alpaca trained from a base model of LLaMA.
This is mostly because the LLaMA 7B model was pretrained on 1T tokens, whereas GPT-J-6B was trained on roughly 300-400B tokens.
You will need a 3090 or an A100 to run it; unfortunately, this current version won't work on a T4.
---
library_name: peft
license: apache-2.0
language:
- en
tags:
- Text Generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
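A minimal loading sketch under two assumptions: the adapter weights in this repository are standard PEFT LoRA weights, and the base model is EleutherAI/gpt-j-6B loaded in 8-bit as the quantization config above suggests. The Alpaca-style prompt is also only an example.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/gpt-j-6B"  # base model named in the description above
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, load_in_8bit=True, device_map="auto")

# attach the LoRA adapter from this repository (assumed to hold PEFT-format weights)
model = PeftModel.from_pretrained(base, "wilmerhenao/olinguito")

prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```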
|
NasimB/gpt2-3-og-concat-modified-aochild
|
NasimB
| 2023-06-26T04:20:47Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T23:51:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-3-og-concat-modified-aochild
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-3-og-concat-modified-aochild
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9275
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.9917 | 0.24 | 500 | 5.0580 |
| 4.751 | 0.48 | 1000 | 4.6761 |
| 4.4491 | 0.72 | 1500 | 4.4474 |
| 4.2552 | 0.96 | 2000 | 4.3018 |
| 4.0564 | 1.21 | 2500 | 4.2130 |
| 3.9694 | 1.45 | 3000 | 4.1354 |
| 3.9064 | 1.69 | 3500 | 4.0597 |
| 3.8419 | 1.93 | 4000 | 3.9915 |
| 3.6722 | 2.17 | 4500 | 3.9682 |
| 3.6318 | 2.41 | 5000 | 3.9315 |
| 3.6106 | 2.65 | 5500 | 3.8886 |
| 3.5928 | 2.89 | 6000 | 3.8514 |
| 3.4548 | 3.13 | 6500 | 3.8612 |
| 3.3861 | 3.38 | 7000 | 3.8411 |
| 3.393 | 3.62 | 7500 | 3.8154 |
| 3.3954 | 3.86 | 8000 | 3.7894 |
| 3.2757 | 4.1 | 8500 | 3.8165 |
| 3.1711 | 4.34 | 9000 | 3.8133 |
| 3.196 | 4.58 | 9500 | 3.7968 |
| 3.1968 | 4.82 | 10000 | 3.7750 |
| 3.1316 | 5.06 | 10500 | 3.8042 |
| 2.9476 | 5.3 | 11000 | 3.8150 |
| 2.9825 | 5.54 | 11500 | 3.8057 |
| 2.9945 | 5.79 | 12000 | 3.7922 |
| 2.9682 | 6.03 | 12500 | 3.8095 |
| 2.7376 | 6.27 | 13000 | 3.8392 |
| 2.7689 | 6.51 | 13500 | 3.8374 |
| 2.78 | 6.75 | 14000 | 3.8313 |
| 2.7801 | 6.99 | 14500 | 3.8215 |
| 2.5564 | 7.23 | 15000 | 3.8731 |
| 2.5648 | 7.47 | 15500 | 3.8790 |
| 2.5779 | 7.71 | 16000 | 3.8779 |
| 2.5815 | 7.96 | 16500 | 3.8749 |
| 2.4329 | 8.2 | 17000 | 3.9075 |
| 2.4187 | 8.44 | 17500 | 3.9123 |
| 2.4313 | 8.68 | 18000 | 3.9145 |
| 2.4232 | 8.92 | 18500 | 3.9151 |
| 2.3723 | 9.16 | 19000 | 3.9246 |
| 2.3473 | 9.4 | 19500 | 3.9267 |
| 2.3464 | 9.64 | 20000 | 3.9275 |
| 2.3445 | 9.88 | 20500 | 3.9275 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
JTeam/MissionControl
|
JTeam
| 2023-06-26T03:26:58Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-06-26T03:12:34Z |
---
license: openrail
---
Mission Control voice model for use with RVC V2, trained for 100 epochs.
|
ardhies/chrislin
|
ardhies
| 2023-06-26T03:25:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-16T09:41:09Z |
---
license: creativeml-openrail-m
---
|
stevengrove/gpt4tools-vicuna-13b-lora
|
stevengrove
| 2023-06-26T03:10:16Z | 0 | 30 | null |
[
"license:mit",
"region:us"
] | null | 2023-04-23T08:53:56Z |
---
license: mit
---
# GPT4Tools: Teaching LLM to Use Tools via Self-instruction
[Lin Song](http://linsong.info/), [Yanwei Li](https://yanwei-li.com/), [Rui Yang](https://github.com/Yangr116), Sijie Zhao, [Yixiao Ge](https://geyixiao.com/), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en)
GPT4Tools is a centralized system that can control multiple visual foundation models.
It is based on Vicuna (LLaMA), and 71K self-built instruction data.
By analyzing the language content, GPT4Tools is capable of automatically deciding, controlling, and utilizing different visual foundation models, allowing the user to interact with images during a conversation.
With this approach, GPT4Tools provides a seamless and efficient solution to fulfill various image-related requirements in a conversation.
Different from previous work, we support users in teaching their own LLMs to use tools through simple refinement via self-instruction and LoRA.
<a href='https://github.com/StevenGrove/GPT4Tools'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://huggingface.co/stevengrove/gpt4tools-vicuna-13b-lora'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a> [](https://youtu.be/Qrj94ibQIT8) [](https://arxiv.org/abs//2305.18752)
|
casque/vaeFtMse840000Ema_v100
|
casque
| 2023-06-26T03:08:57Z | 0 | 4 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T03:06:17Z |
---
license: creativeml-openrail-m
---
|
GaussianTech/llama-7b-sft
|
GaussianTech
| 2023-06-26T03:05:31Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T02:16:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
bghira/pseudo-journey-v2
|
bghira
| 2023-06-26T03:03:57Z | 47 | 12 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-22T01:32:36Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- text-to-image
---
# Capabilities
This model is "adventure" and "fantasy" focused.
With certain inference configurations, it is capable of producing very high quality results.
This model functions better without negative prompts than most fine-tunes.
# Inference parameters
Diffusers should "Just Work" with the config in this repository; a minimal usage sketch follows the resolution list below.
For A1111 users,
Scheduler: DDIM, 15-50 steps
Generally acceptable resolutions:
- 768x768
- 1024x1024
- 1152x768
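A sketch along those lines (prompt, step count, and fp16/CUDA settings are illustrative choices; only the DDIM scheduler and the 768x768 resolution come from the parameters above):
```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained("bghira/pseudo-journey-v2", torch_dtype=torch.float16)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

# no negative prompt on purpose -- this fine-tune works well without one
image = pipe(
    "a lone traveller crossing a glowing fantasy canyon at dusk",
    num_inference_steps=30,  # within the suggested 15-50 DDIM range
    height=768,
    width=768,               # one of the listed resolutions
).images[0]
image.save("pseudo-journey.png")
```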
# Limitations
This model contains a heavily tuned text encoder that has lost many original Stable Diffusion 2.1 concepts.
This model is even less reliable at producing real people than the base 2.1-v model.
Training data included only 768x768 downsampled 1:1 ratio images; all other aspect ratios were discarded. Ergo, this model struggles with high-resolution native generations.
This model may have "burnt" outputs at higher CFG.
# Checkpoints
This model contains multiple revisions:
`02b28ff` (latest/main checkpoint)
30000 steps (approx 4 epochs) with terminal SNR on 22k Midjourney 5.1 images plus 7200 real photographs as balance data with complete BLIP captions on all data. BS=4, LR=4e-7 to 1e-8
`6d3949c` (retrained from ptx0/pseudo-journey)
[retrained: based on ptx0/pseudo-journey @ 4000 steps from stable-diffusion-2-1 baseline on 3300 images] + 9500 steps on 22,400 images, polynomial learning rate scheduler, batch size 4, 64 gradient accumulations, FROZEN text encoder, 8bit ADAM, ZERO PLW (no regularization data), followed by 550 steps with unfrozen text encoder and constant LR 1e-8
`9135a79` (original ckpt test)
13000 steps: trained from ptx0/pseudo-journey, polynomial learning rate scheduler, batch size 3, text encoder, 8bit ADAM, ZERO PLW (no regularization data)
|
GaussianTech/baichuan-7b-sft
|
GaussianTech
| 2023-06-26T03:03:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T04:32:03Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
djifg/grow_classification_five_class
|
djifg
| 2023-06-26T02:38:48Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-14T00:39:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: grow_classification_five_class
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# grow_classification_five_class
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9699
- Accuracy: 0.8381
- F1: 0.8354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5017 | 1.0 | 450 | 0.8314 | 0.7754 | 0.7695 |
| 0.144 | 2.0 | 900 | 0.7651 | 0.8155 | 0.8121 |
| 0.0718 | 3.0 | 1350 | 0.8483 | 0.8331 | 0.8296 |
| 0.0421 | 4.0 | 1800 | 1.0276 | 0.8320 | 0.8290 |
| 0.026 | 5.0 | 2250 | 0.9699 | 0.8381 | 0.8354 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sigmareaver/flan-ul2-4bit-128g-gptq
|
sigmareaver
| 2023-06-26T02:29:18Z | 3 | 8 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T15:55:03Z |
---
language:
- en
- fr
- ro
- de
- multilingual
thumbnail: "url to a thumbnail used in social sharing"
license: apache-2.0
metrics:
- mmlu
---
# flan-ul2 4-bit 128-groupsize GPTQ
Quantized using qwopqwop200's GPTQ-for-Llama repo on the t5 branch.<br>
Original model can be found here: [Google/flan-ul2](https://huggingface.co/google/flan-ul2)
Quantization command:
```
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 python t5.py ../full-models/flan-ul2 wikitext2 --nsamples 256 --wbits 4 --act-order --groupsize 128 --save ../gptq-models/flan-ul2-gptq/flan-ul2-4bit-128g-gptq.pt
```
Benchmark command:
```
python t5.py ../full-models/flan-ul2 wikitext2 --load ../gptq-models/flan-ul2-gptq/flan-ul2-4bit-128g-gptq2.pt --wbits 4 --groupsize 128 --benchmark --benchmark_mode mmlu
```
Results :
```
Average accuracy 0.289 - math
Average accuracy 0.562 - health
Average accuracy 0.416 - physics
Average accuracy 0.780 - business
Average accuracy 0.610 - biology
Average accuracy 0.446 - chemistry
Average accuracy 0.461 - computer science
Average accuracy 0.513 - economics
Average accuracy 0.538 - engineering
Average accuracy 0.455 - philosophy
Average accuracy 0.622 - other
Average accuracy 0.703 - history
Average accuracy 0.707 - geography
Average accuracy 0.718 - politics
Average accuracy 0.653 - psychology
Average accuracy 0.711 - culture
Average accuracy 0.447 - law
Average accuracy 0.416 - STEM
Average accuracy 0.501 - humanities
Average accuracy 0.643 - social sciences
Average accuracy 0.613 - other (business, health, misc.)
MMLU Average accuracy: 0.540
```
|
alikolling/dqn-SpaceInvadersNoFrameskip-v4
|
alikolling
| 2023-06-26T02:26:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T02:25:50Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 530.00 +/- 166.16
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alikolling -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alikolling -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alikolling
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
ecwk/hf-distilbert-imdb-mlm-cosine
|
ecwk
| 2023-06-26T02:24:58Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-26T01:47:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hf-distilbert-imdb-mlm-cosine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf-distilbert-imdb-mlm-cosine
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0939
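Since the sections below are still placeholders, here is a minimal masked-language-modelling sketch (the example sentence is made up):
```python
# Hedged inference sketch; [MASK] is the mask token of the distilbert-base-uncased tokenizer
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ecwk/hf-distilbert-imdb-mlm-cosine")
print(fill_mask("This movie was absolutely [MASK]."))  # top candidate tokens with scores
```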
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 420
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5054 | 1.0 | 383 | 2.3043 |
| 2.3885 | 2.0 | 766 | 2.2447 |
| 2.3418 | 3.0 | 1149 | 2.2319 |
| 2.3045 | 4.0 | 1532 | 2.1883 |
| 2.2772 | 5.0 | 1915 | 2.1893 |
| 2.2543 | 6.0 | 2298 | 2.1683 |
| 2.2308 | 7.0 | 2681 | 2.1454 |
| 2.2139 | 8.0 | 3064 | 2.1403 |
| 2.2008 | 9.0 | 3447 | 2.1165 |
| 2.1937 | 10.0 | 3830 | 2.1281 |
| 2.1778 | 11.0 | 4213 | 2.1189 |
| 2.1742 | 12.0 | 4596 | 2.1218 |
| 2.1611 | 13.0 | 4979 | 2.0996 |
| 2.1562 | 14.0 | 5362 | 2.0992 |
| 2.1508 | 15.0 | 5745 | 2.1013 |
| 2.1469 | 16.0 | 6128 | 2.0945 |
| 2.1437 | 17.0 | 6511 | 2.0899 |
| 2.1436 | 18.0 | 6894 | 2.0827 |
| 2.1443 | 19.0 | 7277 | 2.0935 |
| 2.1389 | 20.0 | 7660 | 2.0939 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
joohwan/ggb2
|
joohwan
| 2023-06-26T02:05:39Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T01:30:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ggb2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ggb2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2504
- Accuracy: 0.7867
- F1: 0.7902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7074 | 1.0 | 329 | 0.9372 | 0.6930 | 0.7034 |
| 0.2637 | 2.0 | 658 | 0.7453 | 0.7716 | 0.7691 |
| 0.1483 | 3.0 | 987 | 0.9178 | 0.7637 | 0.7687 |
| 0.1022 | 4.0 | 1316 | 1.1147 | 0.7665 | 0.7742 |
| 0.0695 | 5.0 | 1645 | 1.0453 | 0.7895 | 0.7941 |
| 0.0518 | 6.0 | 1974 | 0.9508 | 0.8185 | 0.8188 |
| 0.0414 | 7.0 | 2303 | 1.1806 | 0.7784 | 0.7831 |
| 0.0324 | 8.0 | 2632 | 1.1893 | 0.7947 | 0.7950 |
| 0.0272 | 9.0 | 2961 | 1.2167 | 0.7927 | 0.7955 |
| 0.0226 | 10.0 | 3290 | 1.2504 | 0.7867 | 0.7902 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
leboya/mynet
|
leboya
| 2023-06-26T01:54:49Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-06-26T01:52:03Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vnykr/poca-SoccerTwos-v1
|
vnykr
| 2023-06-26T01:52:59Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-06-26T01:11:36Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Find your model_id: vnykr/poca-SoccerTwos-v1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Joserzapata/distilhubert-finetuned-gtzan
|
Joserzapata
| 2023-06-26T01:48:03Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-06-25T14:52:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8818
- Accuracy: 0.85
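For quick reference, a minimal inference sketch with the `transformers` pipeline (the audio path is a placeholder; genre labels come from the model's config):
```python
# Hedged inference sketch; "example_clip.wav" is a placeholder path
from transformers import pipeline

classifier = pipeline("audio-classification", model="Joserzapata/distilhubert-finetuned-gtzan")
predictions = classifier("example_clip.wav", top_k=5)
print(predictions)  # list of {"label": ..., "score": ...} dicts
```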
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5851 | 1.0 | 113 | 1.7243 | 0.5 |
| 1.2937 | 2.0 | 226 | 1.2310 | 0.68 |
| 0.9718 | 3.0 | 339 | 0.8918 | 0.76 |
| 0.6613 | 4.0 | 452 | 0.6837 | 0.81 |
| 0.3693 | 5.0 | 565 | 0.6250 | 0.82 |
| 0.2991 | 6.0 | 678 | 0.5740 | 0.82 |
| 0.1381 | 7.0 | 791 | 0.5874 | 0.83 |
| 0.2047 | 8.0 | 904 | 0.5824 | 0.86 |
| 0.1192 | 9.0 | 1017 | 0.7106 | 0.83 |
| 0.0652 | 10.0 | 1130 | 0.6576 | 0.87 |
| 0.0105 | 11.0 | 1243 | 0.8236 | 0.84 |
| 0.0074 | 12.0 | 1356 | 0.7874 | 0.85 |
| 0.0064 | 13.0 | 1469 | 0.9066 | 0.84 |
| 0.0041 | 14.0 | 1582 | 0.8426 | 0.85 |
| 0.0038 | 15.0 | 1695 | 0.8676 | 0.84 |
| 0.0039 | 16.0 | 1808 | 0.8820 | 0.85 |
| 0.0036 | 17.0 | 1921 | 0.8818 | 0.85 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Zumaridi/wav2vec2-large-xls-r-sw-300m-tr-colab
|
Zumaridi
| 2023-06-26T01:46:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-25T23:55:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_8_0
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-sw-300m-tr-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_8_0
type: common_voice_8_0
config: sw
split: test[:400]
args: sw
metrics:
- name: Wer
type: wer
value: 0.97
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-sw-300m-tr-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_8_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5041
- Wer: 0.97
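As a rough usage sketch (the audio path is a placeholder; wav2vec2 expects 16 kHz mono input):
```python
# Hedged inference sketch for Swahili speech recognition
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Zumaridi/wav2vec2-large-xls-r-sw-300m-tr-colab")
print(asr("swahili_sample.wav"))  # {"text": "..."}
```
Given the ~0.97 WER reported above, transcriptions from this checkpoint should be treated as experimental.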
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5156 | 0.4 | 50 | 2.9989 | 1.0 |
| 2.8791 | 0.8 | 100 | 2.8756 | 1.0 |
| 2.8202 | 1.2 | 150 | 2.8188 | 1.0 |
| 2.7121 | 1.6 | 200 | 2.4545 | 1.0 |
| 1.8877 | 2.0 | 250 | 1.4347 | 1.0 |
| 0.9731 | 2.4 | 300 | 0.7983 | 0.995 |
| 0.659 | 2.8 | 350 | 0.6360 | 0.9925 |
| 0.4981 | 3.2 | 400 | 0.5633 | 0.9675 |
| 0.4125 | 3.6 | 450 | 0.5149 | 0.965 |
| 0.3815 | 4.0 | 500 | 0.5041 | 0.97 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jclynn/finetuning-sentiment-es-synthetic-train-orig-val
|
jclynn
| 2023-06-26T01:41:38Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T20:09:49Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-es-synthetic-train-orig-val
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-es-synthetic-train-orig-val
This model is a fine-tuned version of [jclynn/finetuning-sentiment-es-synthetic-train-orig-val](https://huggingface.co/jclynn/finetuning-sentiment-es-synthetic-train-orig-val) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8411
- Accuracy: 0.9014
- F1: 0.9293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
realmplay/Charybdis-v1.0-GPTQ
|
realmplay
| 2023-06-26T01:39:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-23T08:07:07Z |
<img src="https://media.discordapp.net/attachments/1093298155627491368/1122627585680093254/tyfvenom_A_mythical_and_futuristic_representation_of_Charybdis__18d006ca-45e0-46eb-a6d3-62d06432b4f1.png?width=905&height=905" alt="Image description" width="400" height="400">
# Charybdis v1.0 (GPTQ 4bit version for pulsar)
### A Groundbreaking LLM that redefines roleplaying with unparalleled coherence, 16k context support, and complete uncensorship.
### Experience epic, immersive narratives driven by advanced algorithms and state-of-the-art AI technology, without any limitations.
|
DexoXeck/Spongebob-Dialougue-RVC2
|
DexoXeck
| 2023-06-26T01:39:31Z | 0 | 0 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2023-06-25T19:39:39Z |
---
license: cc-by-4.0
---
Made by SEP64 Productions. (Please credit SEP64's Discord when the model is used.)
A little more than an hour of dataset audio and 500 epochs; training took about 3 or 4 weeks because I used Google Colab and didn't want to pay for a GPU.
Thank you for using my model!
Also, thanks to MrAK2006 for converting my model into a zip that actually works LOL.
|
mnicamartins8/bert-base-uncased-with-misspellings-correction
|
mnicamartins8
| 2023-06-26T01:28:39Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T01:22:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-with-misspellings-correction
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-with-misspellings-correction
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2307
- Accuracy: 0.9023
- Precision: 0.9090
- Recall: 0.9023
- F1: 0.9045
- Balanced Acc: 0.8858
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
mazeinmouse/Reinforce-cartpole
|
mazeinmouse
| 2023-06-26T01:25:16Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T01:12:51Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
stephansf/ppo-LunarLander-v2
|
stephansf
| 2023-06-26T01:21:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T01:20:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.47 +/- 16.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
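Until the snippet above is filled in, here is a rough loading-and-evaluation sketch; the checkpoint filename is an assumption based on the usual SB3 Hub naming convention, so check the repository's file list:
```python
# Hedged sketch: download the checkpoint from the Hub, load it, and evaluate it
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="stephansf/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```
Note that the LunarLander environment additionally needs the Box2D extra (e.g. `pip install gymnasium[box2d]`).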
|
nomad-ai/ppo-PyramidsTraining
|
nomad-ai
| 2023-06-26T01:17:23Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-26T01:17:17Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nomad-ai/ppo-PyramidsTraining
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ecwk/hf-distilbert-imdb-mlm
|
ecwk
| 2023-06-26T01:13:41Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-25T23:24:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: hf-distilbert-imdb-mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hf-distilbert-imdb-mlm
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 420
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5049 | 1.0 | 383 | 2.3056 |
| 2.3896 | 2.0 | 766 | 2.2460 |
| 2.3458 | 3.0 | 1149 | 2.2351 |
| 2.3097 | 4.0 | 1532 | 2.1917 |
| 2.2839 | 5.0 | 1915 | 2.1935 |
| 2.2611 | 6.0 | 2298 | 2.1741 |
| 2.2397 | 7.0 | 2681 | 2.1516 |
| 2.2234 | 8.0 | 3064 | 2.1464 |
| 2.2121 | 9.0 | 3447 | 2.1242 |
| 2.2041 | 10.0 | 3830 | 2.1361 |
| 2.1883 | 11.0 | 4213 | 2.1251 |
| 2.185 | 12.0 | 4596 | 2.1297 |
| 2.1712 | 13.0 | 4979 | 2.1062 |
| 2.1648 | 14.0 | 5362 | 2.1049 |
| 2.1587 | 15.0 | 5745 | 2.1066 |
| 2.1532 | 16.0 | 6128 | 2.0981 |
| 2.1472 | 17.0 | 6511 | 2.0926 |
| 2.1462 | 18.0 | 6894 | 2.0832 |
| 2.1437 | 19.0 | 7277 | 2.0937 |
| 2.1386 | 20.0 | 7660 | 2.0927 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
medmac01/moroccan-qa-falcon-7b
|
medmac01
| 2023-06-26T01:11:04Z | 15 | 0 |
transformers
|
[
"transformers",
"RefinedWebModel",
"text-generation",
"history",
"custom_code",
"en",
"fr",
"dataset:medmac01/moroccan_history_qa",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2023-06-11T15:11:33Z |
---
datasets:
- medmac01/moroccan_history_qa
language:
- en
- fr
library_name: transformers
tags:
- history
---
|
arshiahemmat/NewsPredictor
|
arshiahemmat
| 2023-06-26T00:56:02Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T00:01:28Z |
# Persian News Classification Model
This project presents a machine learning model trained on a dataset of over 25,000 Persian news articles. The model is designed to classify news articles into one of seven categories: Sport, Science, Culture, Politics, International, Economic, and Social.
## Dataset
The dataset used for this project consists of more than 25,000 Persian news articles. These articles are categorized into seven distinct categories, providing a diverse range of topics for the model to learn from. The categories are as follows:
1. Sport
2. Science
3. Culture
4. Politics
5. International
6. Economic
7. Social
## Model
The model has been trained on this extensive dataset, learning to identify and understand the nuances of each category. This allows it to accurately classify new, unseen Persian news articles into the appropriate category.
## Usage
To use this model, simply input a Persian news article and the model will output the predicted category. This can be useful for a variety of applications, such as news aggregation services, content recommendation systems, and more.
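As a rough illustration of that workflow (the headline is a made-up example and the label names come from the model's config):
```python
# Hedged usage sketch with the transformers text-classification pipeline
from transformers import pipeline

classifier = pipeline("text-classification", model="arshiahemmat/NewsPredictor")
print(classifier("تیم ملی فوتبال ایران بازی را برد"))  # a short sports-related example headline
```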
## Future Work
We plan to continuously improve and update this model, incorporating more data and refining the model's architecture to increase its accuracy and efficiency.
## Contributions
Contributions to this project are welcome. If you have suggestions or improvements, feel free to open an issue or submit a pull request.
|
Loguc/Reinforce-CartPole
|
Loguc
| 2023-06-26T00:55:51Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T00:55:43Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Wanlau/sovits-4.0_datealive
|
Wanlau
| 2023-06-26T00:55:35Z | 0 | 6 | null |
[
"audio-to-audio",
"region:us"
] |
audio-to-audio
| 2023-04-06T12:48:22Z |
---
pipeline_tag: audio-to-audio
---
# Singing Voice Conversion Models for *Date A Live*
 
[**English**](./README.md) | [**中文简体**](./README_zh_CN.md)
---
Using so-vits-svc-4.0 (SoftVC VITS Singing Voice Conversion)
[**so-vits-svc**](https://github.com/svc-develop-team/so-vits-svc)
 
SOVITS models for *Date A Live*, using voice data from *Date A Live: Rio Reincarnation*
These models are intended to facilitate communication and learning. Using them for illegal activities is strictly prohibited.
Each model directory is named after the character's ID. [**character list**](./character-list.txt)
An online demo for SOVITS model: [**SOVITS_DAL_ONLINE**](https://huggingface.co/spaces/Wanlau/sovits-4.0_datealive)
---
## Terms of Use
From *https://github.com/svc-develop-team/so-vits-svc*

---
## character list

 

| ID | Name | English |
|:--:|:----:|:-------:|
| 00 | 士道 | Shido |
| 01 | 十香 | Tohka |
| 02 | 折纸 | Origami |
| 03 | 四糸乃 | Yoshino |
| 04 | 狂三 | Kurumi |
| 05 | 琴里 | Kotori |
| 06 | 凛祢 | Rinne |
| 07 | 四糸奈 | Yoshinon |
| 08 | 令音 | Reine |
| 09 | 神无月 | Kannazuki |
| 10 | 殿町 | Tonomachi |
| 11 | 珠惠 | Tamae |
| 12 | 日下部 | Kusakabe |
| 13 | 亚衣 | Ai |
| 14 | 麻衣 | Mai |
| 15 | 美衣 | Mii |
| 16 | 支配者 | Ruler |
| 21 | 鞠亚 | Maria |
| 22 | 耶俱矢 | Kaguya |
| 23 | 夕弦 | Yuzuru |
| 24 | 美九 | Miku |
| 25 | 真那 | Mana |
| 26 | 美纪惠 | Mikie |
| 27 | 鞠奈 | Marina |
| 28 | 凛绪 | Rio |
---
## Links
[**so-vits-svc**](https://github.com/svc-develop-team/so-vits-svc)
[**SOVITS_DAL_ONLINE**](https://huggingface.co/spaces/Wanlau/sovits-4.0_datealive)
[**RVC models for *Date A Live***](https://huggingface.co/Wanlau/RVC_datealive)
[**online TTS model for *Date A Live***](https://huggingface.co/spaces/hzrr/dal_audio_inference)
|
dean-r/ppo-LunarLander-v2-w1
|
dean-r
| 2023-06-26T00:39:45Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T23:22:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.39 +/- 20.24
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
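Until the snippet above is filled in, a minimal loading sketch could look like this (the filename is assumed; see the repository's file list for the actual checkpoint name):
```python
# Hedged sketch: download and load the checkpoint
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="dean-r/ppo-LunarLander-v2-w1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
print(model.policy)  # inspect the loaded policy network
```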
|
nomad-ai/ppo-SnowballTarget
|
nomad-ai
| 2023-06-26T00:36:28Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-26T00:36:22Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nomad-ai/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gabrielZang/alpaca7B-lora-fine-tuning-with-test-data
|
gabrielZang
| 2023-06-25T23:26:54Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T23:26:52Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
heinjan/TI-mobilenetv3-imagenet-v2-v1
|
heinjan
| 2023-06-25T23:09:52Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-25T23:06:23Z |
---
pipeline_tag: image-classification
---
|
mnicamartins8/bert-base-uncased-without-corrections
|
mnicamartins8
| 2023-06-25T23:01:26Z | 138 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-25T21:07:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-without-corrections
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-without-corrections
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2283
- Accuracy: 0.9070
- Precision: 0.9114
- Recall: 0.9070
- F1: 0.9086
- Balanced Acc: 0.8857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
heinjan/TI-mobilenetv3-imagenet-v2-v2
|
heinjan
| 2023-06-25T22:59:51Z | 4 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"region:us"
] |
image-classification
| 2023-06-25T18:19:59Z |
---
pipeline_tag: image-classification
---
|
AoneOne/aoneone
|
AoneOne
| 2023-06-25T22:32:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-14T21:49:50Z |
---
license: creativeml-openrail-m
---
|
yuanding/bert-finetuned-ner
|
yuanding
| 2023-06-25T22:12:21Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-25T21:15:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9373139265630168
- name: Recall
type: recall
value: 0.9537192864355436
- name: F1
type: f1
value: 0.9454454454454455
- name: Accuracy
type: accuracy
value: 0.9873874139047507
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0879
- Precision: 0.9373
- Recall: 0.9537
- F1: 0.9454
- Accuracy: 0.9874
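For quick reference, a minimal inference sketch (the sentence is a made-up example):
```python
# Hedged usage sketch; aggregation_strategy="simple" merges word pieces into whole entities
from transformers import pipeline

ner = pipeline("token-classification", model="yuanding/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face Inc. is based in New York City."))
```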
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0216 | 1.0 | 1756 | 0.0975 | 0.9134 | 0.9357 | 0.9244 | 0.9815 |
| 0.0128 | 2.0 | 3512 | 0.0873 | 0.9296 | 0.9460 | 0.9377 | 0.9859 |
| 0.0097 | 3.0 | 5268 | 0.0821 | 0.9320 | 0.9498 | 0.9408 | 0.9859 |
| 0.0078 | 4.0 | 7024 | 0.0827 | 0.9350 | 0.9532 | 0.944 | 0.9870 |
| 0.0032 | 5.0 | 8780 | 0.0828 | 0.9301 | 0.9514 | 0.9406 | 0.9870 |
| 0.0017 | 6.0 | 10536 | 0.0879 | 0.9373 | 0.9537 | 0.9454 | 0.9874 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
renyulin/opt125m-imdb-sft-lora8bit
|
renyulin
| 2023-06-25T21:57:38Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-25T21:56:46Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
crlandsc/tiny-audio-diffusion-percussion
|
crlandsc
| 2023-06-25T21:55:33Z | 5 | 2 | null |
[
"audio",
"diffusion",
"waveform diffusion",
"audio diffusion",
"unet",
"region:us"
] | null | 2023-06-18T17:02:14Z |
---
tags:
- audio
- diffusion
- waveform diffusion
- audio diffusion
- unet
---
# Model Card for tiny-audio-diffusion-percussion
General percussion/drum model for tiny-audio-diffusion. Use it with the [tiny-audio-diffusion](https://github.com/crlandsc/tiny-audio-diffusion) repo to generate random drum samples of all types.
|