modelId (string) | author (string) | last_modified (timestamp[us, UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
Shubham09/testing
|
Shubham09
| 2023-06-26T13:07:51Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T13:07:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
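These settings map onto transformers' `BitsAndBytesConfig` roughly as in the sketch below (this is an illustration only; the card does not name the base model or task, so `BASE_MODEL_ID` is a placeholder and a causal-LM base is assumed):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)

BASE_MODEL_ID = "..."  # placeholder: the card does not state the base model
base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL_ID, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Shubham09/testing")
```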
### Framework versions
- PEFT 0.4.0.dev0
|
nevernotsean/IllustratedPaperMini
|
nevernotsean
| 2023-06-26T12:54:11Z | 34 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T04:04:10Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
---
Merge of the spybgsToolkitFor_v50NoiseOffset checkpoint with the COOLKIDS_MERGE_V2.5 and papermini_v1-3 LoRAs.
Prompt: brown arm made of fur, claws, white-background, tk-char, prop
Negative prompt: shadow, texture, (worst quality, low quality:1.4)
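A minimal diffusers sketch for trying the merge with the prompts above (assuming the repo loads as a standard `StableDiffusionPipeline`, as its tags indicate):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nevernotsean/IllustratedPaperMini", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "brown arm made of fur, claws, white-background, tk-char, prop",
    negative_prompt="shadow, texture, (worst quality, low quality:1.4)",
).images[0]
image.save("sample.png")
```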
|
lucasairvc/random-stuff
|
lucasairvc
| 2023-06-26T12:34:22Z | 0 | 0 | null |
[
"license:wtfpl",
"region:us"
] | null | 2023-06-25T20:02:20Z |
---
license: wtfpl
---
# RANDOM STUFF
### im being held hostage in imagine orange's basement
|
biglam/autotrain-beyond-the-books
|
biglam
| 2023-06-26T12:27:32Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"text-classification",
"autotrain",
"lam",
"en",
"dataset:biglam/on_the_books",
"license:mit",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-07T09:59:47Z |
---
tags:
- autotrain
- text-classification
- lam
language:
- en
widget:
- text: >-
Neither this act nor any other act relating to said Cherokee Indians of
Robeson County shall be construed so as to impose on said Indians any
powers, privileges, rights or immunities, or
- text: >-
That Section one hundred and twenty-two eightythree of the General Statutes
of North Carolina is hereby amended by striking out the word insane in the
catch line and in lines two, four, nine and fifteen and inserting in lieu
thereof the words mentally disordered.
datasets:
- biglam/on_the_books
co2_eq_emissions:
emissions: 0.2641096478393395
license: mit
library_name: transformers
metrics:
- accuracy
- f1
- recall
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 64771135885
- CO2 Emissions (in grams): 0.2641
## Validation Metrics
- Loss: 0.057
- Accuracy: 0.986
- Precision: 0.988
- Recall: 0.992
- AUC: 0.998
- F1: 0.990
## Usage
This model is trained on a dataset of historical documents related to Jim Crow laws in the United States.
The model was developed by drawing on the expertise of scholars and analyzing legal texts from various states, with the goal of identifying similarities between different states' Jim Crow laws.
As such, this model may be useful for researchers or policymakers interested in understanding the history of racial discrimination in the US legal system.
The easiest way to use this model locally is via the [Transformers](https://huggingface.co/docs/transformers/index) library [pipelines for inference](https://huggingface.co/docs/transformers/pipeline_tutorial).
Once you have [installed transformers](https://huggingface.co/docs/transformers/installation), you can run the following code.
This will download and cache the model locally and allow you to make predictions on text input.
```python
from transformers import pipeline

classifier = pipeline('text-classification', "biglam/autotrain-beyond-the-books")

# Any passage of legal text to classify; this example is taken from the widget examples above.
text = "Neither this act nor any other act relating to said Cherokee Indians of Robeson County shall be construed so as to impose on said Indians any powers, privileges, rights or immunities"
classifier(text)
```
This will return predictions in the following format:
```
[{'label': 'no_jim_crow', 'score': 0.9718555212020874}]
```
|
kalyaniAI/autotrain-autotrain-69874137966
|
kalyaniAI
| 2023-06-26T12:08:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:kalyaniAI/autotrain-data-autotrain",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-26T12:07:46Z |
---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- kalyaniAI/autotrain-data-autotrain
co2_eq_emissions:
emissions: 0.025148621653341533
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 69874137966
- CO2 Emissions (in grams): 0.0251
## Validation Metrics
- Loss: 8.770
- Rouge1: 0.000
- Rouge2: 0.000
- RougeL: 0.000
- RougeLsum: 0.000
- Gen Len: 16.333
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/kalyaniAI/autotrain-autotrain-69874137966
```
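The same request can be issued from Python; a minimal sketch mirroring the cURL call above (the endpoint URL is copied verbatim from that command):
```python
import requests

API_URL = "https://api-inference.huggingface.co/kalyaniAI/autotrain-autotrain-69874137966"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```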
|
JeremiahZ/TinyBERT_4L_zh_backup
|
JeremiahZ
| 2023-06-26T12:03:53Z | 153 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"zh",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-09-24T12:59:17Z |
---
language:
- zh
---
# Tiny BERT 4L zh
This is a backup repo of [the official repo](https://huggingface.co/huawei-noah/TinyBERT_4L_zh).
Now the model can be loaded with
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("JeremiahZ/TinyBERT_4L_zh_backup")
```
More information on this model can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT).
|
miyao-haruto/ppo-LunarLander-v2
|
miyao-haruto
| 2023-06-26T11:55:26Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T11:55:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.05 +/- 19.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is not stated in the card; `ppo-LunarLander-v2.zip` is assumed, following the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(repo_id="miyao-haruto/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
trustvare/trustvare-outlook-to-mbox-converter-tool
|
trustvare
| 2023-06-26T11:55:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-26T11:53:24Z |
TrustVare PST to MBOX Converter software provides a specialized solution for converting PST files to MBOX format. With its user-friendly interface, accurate conversion results, and preservation of email properties, this software offers a convenient way for users to migrate their email data from Microsoft Outlook to MBOX-supported email clients or applications. Users retain control over the conversion process through options to convert selected folders or an entire PST file. Batch conversion is also supported, allowing users to convert several PST files to MBOX format at once, saving time and effort. By converting PST files to MBOX, users can import their email data into a variety of email clients or applications that support the MBOX format, such as Mozilla Thunderbird, Apple Mail, Eudora, and others.
Click here for more information: https://www.trustvare.com/pst/mbox/
|
GregoRio123/ssmp
|
GregoRio123
| 2023-06-26T11:49:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T11:27:21Z |
---
license: creativeml-openrail-m
---
|
Shubham09/falcon_pdfmedical
|
Shubham09
| 2023-06-26T11:44:44Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T11:37:32Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
hypothetical/test_model
|
hypothetical
| 2023-06-26T11:42:45Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-26T11:42:45Z |
---
license: bigscience-openrail-m
---
|
SumanTenzai/Dummy
|
SumanTenzai
| 2023-06-26T11:29:52Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-26T08:43:55Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: Dummy
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Dummy
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
|
romankovsv/test4
|
romankovsv
| 2023-06-26T11:07:50Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T10:50:17Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
iammartian0/detr-resnet-50_finetuned_cppe5
|
iammartian0
| 2023-06-26T11:00:28Z | 186 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:forklift-object-detection",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-06-26T10:51:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- forklift-object-detection
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the forklift-object-detection dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
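The card does not include inference code; a minimal sketch, assuming the checkpoint works with the standard transformers object-detection pipeline (`forklift.jpg` is a placeholder for a local image):
```python
from transformers import pipeline

# Loads the fine-tuned DETR checkpoint and runs it on a local image.
detector = pipeline("object-detection", model="iammartian0/detr-resnet-50_finetuned_cppe5")
print(detector("forklift.jpg"))  # placeholder path; returns boxes, labels, and scores
```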
|
plediskey/leasn
|
plediskey
| 2023-06-26T11:00:12Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T10:57:13Z |
---
license: creativeml-openrail-m
---
|
christinakyp/whisper-tiny-train1
|
christinakyp
| 2023-06-26T10:50:11Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"asr",
"generated_from_trainer",
"en",
"dataset:christinakyp/dsing1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T09:22:12Z |
---
language:
- en
license: apache-2.0
tags:
- asr
- generated_from_trainer
datasets:
- christinakyp/dsing1
model-index:
- name: Whisper Tiny Sing - CK
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Sing - CK
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the DSing1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
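No inference example is included; a minimal sketch, assuming the checkpoint loads with the standard transformers speech-recognition pipeline (`sample.wav` is a placeholder for a local audio file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="christinakyp/whisper-tiny-train1")
print(asr("sample.wav"))  # placeholder path; returns {"text": ...}
```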
|
Rami/dolly_prompt_generator
|
Rami
| 2023-06-26T10:38:18Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-09T21:39:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0.dev0
|
J4m35M4xw3ll/q-FrozenLake-v1-4x4-Slippery
|
J4m35M4xw3ll
| 2023-06-26T10:26:44Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T10:26:42Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
metrics:
- type: mean_reward
value: 0.77 +/- 0.42
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# Requires `gym`/`gymnasium` (imported as `gym`) and the `load_from_hub` helper from the
# Hugging Face Deep RL course notebook, which downloads and unpickles the saved Q-table.
model = load_from_hub(repo_id="J4m35M4xw3ll/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
El-is-a-bat/ToonYou
|
El-is-a-bat
| 2023-06-26T10:19:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T09:51:24Z |
---
license: creativeml-openrail-m
---
|
jondurbin/airoboros-mpt-30b-gpt4-1p4-three-epochs
|
jondurbin
| 2023-06-26T10:12:58Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T16:40:13Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
## Overview
This is a test of qlora fine-tuning of the mpt-30b model, __with 3 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*This is not as high quality as the llama-33b versions unfortunately, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
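Since the fork expects a JSONL input file with "instruction" and "response" fields, here is a minimal sketch of preparing one (the field names come from the description above; the example content is purely illustrative):
```python
import json

# Illustrative examples only; the actual training data came from the airoboros gpt4-1.4 dataset.
examples = [
    {
        "instruction": "Summarize the difference between --max_steps and --num_train_epochs.",
        "response": "--max_steps caps the total optimizer steps, while --num_train_epochs runs full passes over the dataset.",
    },
]

with open("instructions.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```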
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise impermissibly licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
|
jondurbin/airoboros-mpt-30b-gpt4-1p4-four-epochs
|
jondurbin
| 2023-06-26T10:12:48Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T16:17:04Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
## Overview
This is a test of qlora fine-tuning of the mpt-30b model, __with 4 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*This is not as high quality as the llama-33b versions unfortunately, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise impermissibly licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
|
gfalcao/ldsc26jun-nocrop
|
gfalcao
| 2023-06-26T10:12:28Z | 35 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T10:06:58Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### ldsc26Jun-nocrop Dreambooth model trained by gfalcao with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
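Alternatively, a minimal diffusers sketch (assuming the repo loads as a standard `StableDiffusionPipeline`, as its tags suggest; the trained concept prompt is not stated in this card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "gfalcao/ldsc26jun-nocrop", torch_dtype=torch.float16
).to("cuda")

# The instance prompt/token is not documented here; "ldsc26jun" is only a guess based on the repo name.
image = pipe("a photo of ldsc26jun").images[0]
image.save("sample.png")
```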
Sample pictures of this concept:
|
jondurbin/airoboros-mpt-30b-gpt4-1p4-six-epochs
|
jondurbin
| 2023-06-26T10:12:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mpt",
"text-generation",
"custom_code",
"dataset:jondurbin/airoboros-gpt4-1.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-26T08:37:14Z |
---
license: other
datasets:
- jondurbin/airoboros-gpt4-1.4
---
## Overview
This is a test of qlora fine-tuning of the mpt-30b model, __with 6 epochs__.
qlora compatible model: https://huggingface.co/jondurbin/mpt-30b-qlora-compatible
My fork of qlora with mpt-30b support: https://github.com/jondurbin/qlora
Differences in the qlora scripts:
- requires adding `--mpt True` for mpt-based models
- uses `--num_train_epochs` instead of `--max_steps`
- uses airoboros prompt format (mostly 1:1 with vicuna) rather than alpaca, and expects an input file in JSONL format with "instruction" and "response"
__I think there's a bug in gradient accumulation, so if you try this, maybe set gradient accumulation steps to 1__
See the mpt-30b-qlora-compatible model card for training details.
*This is not as high quality as the llama-33b versions unfortunately, but I don't have a great answer as to why. Perhaps there are fewer forward layers that can be tuned?*
### License and usage
This is a real gray area, here's why:
- the dataset was generated with gpt-4, via https://github.com/jondurbin/airoboros
- the ToS for openai API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- a 30b parameter model isn't anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise impermissibly licensed material in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely not placing a license on here because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially, especially since it didn't perform quite as well as expected using qlora.
|
madiltalay/layoutlmv2-base-uncased_finetuned_docvqa
|
madiltalay
| 2023-06-26T10:11:26Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"document-question-answering",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
document-question-answering
| 2023-06-22T11:36:16Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-base-uncased_finetuned_docvqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-base-uncased_finetuned_docvqa
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.326 | 0.22 | 50 | 4.4949 |
| 4.292 | 0.44 | 100 | 3.9510 |
| 3.9419 | 0.66 | 150 | 3.9100 |
| 3.6895 | 0.88 | 200 | 3.5035 |
| 3.4052 | 1.11 | 250 | 3.4030 |
| 3.1405 | 1.33 | 300 | 3.2100 |
| 2.8966 | 1.55 | 350 | 2.9803 |
| 2.7874 | 1.77 | 400 | 2.7811 |
| 2.5385 | 1.99 | 450 | 2.4748 |
| 2.1532 | 2.21 | 500 | 2.5843 |
| 1.994 | 2.43 | 550 | 2.5459 |
| 1.8322 | 2.65 | 600 | 2.2316 |
| 1.7005 | 2.88 | 650 | 2.1888 |
| 1.4758 | 3.1 | 700 | 2.4578 |
| 1.3543 | 3.32 | 750 | 2.3368 |
| 1.1939 | 3.54 | 800 | 2.9737 |
| 1.294 | 3.76 | 850 | 2.4907 |
| 1.4519 | 3.98 | 900 | 1.9276 |
| 1.0517 | 4.2 | 950 | 2.9981 |
| 0.8171 | 4.42 | 1000 | 2.5618 |
| 1.0456 | 4.65 | 1050 | 2.3139 |
| 0.9222 | 4.87 | 1100 | 2.4243 |
| 0.758 | 5.09 | 1150 | 2.8167 |
| 0.7203 | 5.31 | 1200 | 2.9342 |
| 0.6748 | 5.53 | 1250 | 2.6396 |
| 0.6821 | 5.75 | 1300 | 2.5629 |
| 0.5898 | 5.97 | 1350 | 3.0276 |
| 0.3135 | 6.19 | 1400 | 3.2611 |
| 0.4407 | 6.42 | 1450 | 3.1793 |
| 0.5303 | 6.64 | 1500 | 3.0511 |
| 0.5294 | 6.86 | 1550 | 3.1106 |
| 0.3149 | 7.08 | 1600 | 3.2933 |
| 0.199 | 7.3 | 1650 | 3.4207 |
| 0.164 | 7.52 | 1700 | 3.4379 |
| 0.5258 | 7.74 | 1750 | 3.1339 |
| 0.336 | 7.96 | 1800 | 3.2394 |
| 0.3294 | 8.19 | 1850 | 3.0956 |
| 0.1587 | 8.41 | 1900 | 3.4282 |
| 0.2375 | 8.63 | 1950 | 3.3718 |
| 0.117 | 8.85 | 2000 | 3.5646 |
| 0.2873 | 9.07 | 2050 | 3.5213 |
| 0.2206 | 9.29 | 2100 | 3.5387 |
| 0.2503 | 9.51 | 2150 | 3.5683 |
| 0.0763 | 9.73 | 2200 | 3.6119 |
| 0.1344 | 9.96 | 2250 | 3.6030 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
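No inference example is included; a minimal sketch, assuming the checkpoint works with the transformers document-question-answering pipeline (which also needs Tesseract/pytesseract for OCR; the image path and question are placeholders):
```python
from transformers import pipeline

docqa = pipeline(
    "document-question-answering",
    model="madiltalay/layoutlmv2-base-uncased_finetuned_docvqa",
)
print(docqa(image="invoice.png", question="What is the invoice total?"))  # placeholder inputs
```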
|
heka-ai/gpl-test
|
heka-ai
| 2023-06-26T10:04:16Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-26T10:04:12Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# heka-ai/gpl-test
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('heka-ai/gpl-test')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('heka-ai/gpl-test')
model = AutoModel.from_pretrained('heka-ai/gpl-test')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=heka-ai/gpl-test)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jondurbin/airoboros-33b-gpt4-1.4-peft
|
jondurbin
| 2023-06-26T09:59:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-25T16:23:30Z |
---
license: cc-by-nc-4.0
---
adapter model for: https://huggingface.co/jondurbin/airoboros-33b-gpt4-1.4
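A minimal PEFT sketch for applying the adapter (the card does not say which base checkpoint the adapter expects, so `BASE_MODEL_ID` is a placeholder):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

BASE_MODEL_ID = "..."  # placeholder: the base checkpoint is not stated in this card
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL_ID, device_map="auto")
model = PeftModel.from_pretrained(base, "jondurbin/airoboros-33b-gpt4-1.4-peft")
```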
|
Qasim30/taxi-v3-hugging
|
Qasim30
| 2023-06-26T09:52:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T09:52:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-hugging
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# Requires `gym`/`gymnasium` (imported as `gym`) and the `load_from_hub` helper from the
# Hugging Face Deep RL course notebook, which downloads and unpickles the saved Q-table.
model = load_from_hub(repo_id="Qasim30/taxi-v3-hugging", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ahessamb/bertopic-umap15-hbd15-topn15
|
ahessamb
| 2023-06-26T09:46:21Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-26T09:46:14Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic-umap15-hbd15-topn15
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("ahessamb/bertopic-umap15-hbd15-topn15")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 105
* Number of training documents: 14320
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | market - price - nft - said - cryptocurrency | 15 | -1_market_price_nft_said |
| 0 | korea - funds - attack - hackers - fraud | 6725 | 0_korea_funds_attack_hackers |
| 1 | usd - 500 - near - bitcoin - consolidating | 706 | 1_usd_500_near_bitcoin |
| 2 | sized - digest - news - blockchain - radar | 417 | 2_sized_digest_news_blockchain |
| 3 | merge - ethereum - proof - fork - beacon | 236 | 3_merge_ethereum_proof_fork |
| 4 | rate - cpi - hikes - fomc - bitcoin | 209 | 4_rate_cpi_hikes_fomc |
| 5 | luna - ustc - entropy - proposal - terraform | 207 | 5_luna_ustc_entropy_proposal |
| 6 | brands - meta - worlds - immersive - decentraland | 206 | 6_brands_meta_worlds_immersive |
| 7 | russia - sanctions - crypto - ruble - settlements | 187 | 7_russia_sanctions_crypto_ruble |
| 8 | gensler - securities - coinbase - industry - regulation | 178 | 8_gensler_securities_coinbase_industry |
| 9 | blockchain - web3 - gamers - p2e - industry | 174 | 9_blockchain_web3_gamers_p2e |
| 10 | miners - carbon - power - bitcoin - report | 157 | 10_miners_carbon_power_bitcoin |
| 11 | funding - round - ventures - capital - gamestop | 151 | 11_funding_round_ventures_capital |
| 12 | xrp - ripple - price - level - resistance | 146 | 12_xrp_ripple_price_level |
| 13 | etf - blackrock - grayscale - bitcoin - futures | 145 | 13_etf_blackrock_grayscale_bitcoin |
| 14 | web3 - disco - mcmullen - identity - platforms | 144 | 14_web3_disco_mcmullen_identity |
| 15 | protocols - decentralized - newsletter - cefi - lending | 141 | 15_protocols_decentralized_newsletter_cefi |
| 16 | inu - lucie - meme - tokens - ecosystem | 139 | 16_inu_lucie_meme_tokens |
| 17 | ftx - sam - bankman - bankruptcy - ceo | 132 | 17_ftx_sam_bankman_bankruptcy |
| 18 | tether - usdt - documents - coindesk - stablecoins | 123 | 18_tether_usdt_documents_coindesk |
| 19 | el - bukele - nayib - bitcoin - x93 | 120 | 19_el_bukele_nayib_bitcoin |
| 20 | dogecoin - musk - meme - twitter - level | 114 | 20_dogecoin_musk_meme_twitter |
| 21 | 26 - resistance - near - btc - bulls | 106 | 21_26_resistance_near_btc |
| 22 | nft - opensea - doppel - marketplaces - rug | 101 | 22_nft_opensea_doppel_marketplaces |
| 23 | cfds - traders - assets - cryptocurrency - adoption | 95 | 23_cfds_traders_assets_cryptocurrency |
| 24 | difficulty - hashrate - bitcoin - network - height | 90 | 24_difficulty_hashrate_bitcoin_network |
| 25 | ubi - cointelegraph - simonin - bitcoin - income | 88 | 25_ubi_cointelegraph_simonin_bitcoin |
| 26 | coinbase - bitkey - india - ceo - fees | 85 | 26_coinbase_bitkey_india_ceo |
| 27 | donated - russia - invasion - transformation - donors | 83 | 27_donated_russia_invasion_transformation |
| 28 | celsius - cel - withdrawals - company - mashinsky | 81 | 28_celsius_cel_withdrawals_company |
| 29 | nfts - collections - million - floor - cryptopunk | 81 | 29_nfts_collections_million_floor |
| 30 | blockchain - bvm - mvc - maestro - databases | 78 | 30_blockchain_bvm_mvc_maestro |
| 31 | crypto - merchants - mastercard - feature - cashapp | 78 | 31_crypto_merchants_mastercard_feature |
| 32 | ada - cardano - bearish - satoshis - market | 76 | 32_ada_cardano_bearish_satoshis |
| 33 | nft - sartoshi - artists - snoop - community | 75 | 33_nft_sartoshi_artists_snoop |
| 34 | solana - bearish - outages - fibonacci - resistance | 72 | 34_solana_bearish_outages_fibonacci |
| 35 | hinman - ripple - speech - emails - xrp | 71 | 35_hinman_ripple_speech_emails |
| 36 | oecd - taxation - framework - india - electronic | 70 | 36_oecd_taxation_framework_india |
| 37 | terraform - montenegro - korea - x93 - milojko | 69 | 37_terraform_montenegro_korea_x93 |
| 38 | order - securities - freeze - restraining - cyprus | 68 | 38_order_securities_freeze_restraining |
| 39 | manchester - sponsorship - bcci - com - fans | 68 | 39_manchester_sponsorship_bcci_com |
| 40 | surveyed - millennials - managers - crypto - report | 67 | 40_surveyed_millennials_managers_crypto |
| 41 | whales - eth - market - transactions - usdt | 66 | 41_whales_eth_market_transactions |
| 42 | binance - kazakhstan - changpeng - expansion - 500m | 61 | 42_binance_kazakhstan_changpeng_expansion |
| 43 | twitter - musk - metatime - jack - yaccarino | 59 | 43_twitter_musk_metatime_jack |
| 44 | rsi - price - line - altcoin - bullish | 59 | 44_rsi_price_line_altcoin |
| 45 | china - huobi - hkma - regulatory - companies | 57 | 45_china_huobi_hkma_regulatory |
| 46 | token - leo - surged - tlos - graph | 57 | 46_token_leo_surged_tlos |
| 47 | cbdcs - governor - banks - mit - project | 56 | 47_cbdcs_governor_banks_mit |
| 48 | daos - chorus - lieberman - decentralized - organizations | 51 | 48_daos_chorus_lieberman_decentralized |
| 49 | fungible - nonfungible - tokens - nft - 2021 | 51 | 49_fungible_nonfungible_tokens_nft |
| 50 | altcoins - levels - overhead - support - bounce | 50 | 50_altcoins_levels_overhead_support |
| 51 | yuan - digital - tax - cbdc - wallets | 43 | 51_yuan_digital_tax_cbdc |
| 52 | depot - company - invest - banking - america | 42 | 52_depot_company_invest_banking |
| 53 | markets - advice - bull - hodlers - nasdaily | 42 | 53_markets_advice_bull_hodlers |
| 54 | eth - level - breakout - tradingview - analysts | 38 | 54_eth_level_breakout_tradingview |
| 55 | nethereum - usd - struggling - resistance - performers | 37 | 55_nethereum_usd_struggling_resistance |
| 56 | ecoterra - trending - swords - presale - neo | 36 | 56_ecoterra_trending_swords_presale |
| 57 | securities - market - binance - coinbase - week | 34 | 57_securities_market_binance_coinbase |
| 58 | staking - eigenlayer - sip - ethereum - tokens | 33 | 58_staking_eigenlayer_sip_ethereum |
| 59 | founder - ethereum - forgotten - values - twitter | 33 | 59_founder_ethereum_forgotten_values |
| 60 | bnb - bauer - upgrade - ecosystem - network | 32 | 60_bnb_bauer_upgrade_ecosystem |
| 61 | price - rsi - bullish - chart - resistance | 32 | 61_price_rsi_bullish_chart |
| 62 | expiry - week - billion - derivatives - bet | 32 | 62_expiry_week_billion_derivatives |
| 63 | vasil - fork - mainnet - newest - scalability | 31 | 63_vasil_fork_mainnet_newest |
| 64 | microstrategy - saylor - btc - rumor - billion | 31 | 64_microstrategy_saylor_btc_rumor |
| 65 | metamask - browser - wallets - features - allows | 31 | 65_metamask_browser_wallets_features |
| 66 | uae - east - chainalysis - singapore - emerging | 31 | 66_uae_east_chainalysis_singapore |
| 67 | outflows - etps - products - week - funds | 31 | 67_outflows_etps_products_week |
| 68 | polygon - zcash - kakarot - starknet - protocol | 29 | 68_polygon_zcash_kakarot_starknet |
| 69 | japanese - jvcea - stablecoin - x93 - fatf | 29 | 69_japanese_jvcea_stablecoin_x93 |
| 70 | asic - miner - gpu - mi300x - ks3 | 28 | 70_asic_miner_gpu_mi300x |
| 71 | arrows - voyager - dcg - genesis - bankruptcy | 28 | 71_arrows_voyager_dcg_genesis |
| 72 | axie - infinity - program - ronin - upgrades | 26 | 72_axie_infinity_program_ronin |
| 73 | withdrawals - platform - freeway - halted - babel | 26 | 73_withdrawals_platform_freeway_halted |
| 74 | addresses - eth - glassnode - underwater - cryptos | 26 | 74_addresses_eth_glassnode_underwater |
| 75 | bottoming - dip - markets - chain - altcoins | 25 | 75_bottoming_dip_markets_chain |
| 76 | mica - eu - conglomerates - jurisdictions - framework | 25 | 76_mica_eu_conglomerates_jurisdictions |
| 77 | liquidations - resting - bid - order - 200 | 25 | 77_liquidations_resting_bid_order |
| 78 | listings - missed - announcements - usdt - exchanges | 25 | 78_listings_missed_announcements_usdt |
| 79 | cbdc - ripple - border - imf - currencies | 25 | 79_cbdc_ripple_border_imf |
| 80 | announcements - delisting - pair - listing - collection | 24 | 80_announcements_delisting_pair_listing |
| 81 | treasury - mixers - sanctioning - github - prank | 24 | 81_treasury_mixers_sanctioning_github |
| 82 | polkadot - parachains - auctions - opengov - referenda | 24 | 82_polkadot_parachains_auctions_opengov |
| 83 | hedge - investors - crypto - traditional - enriquez | 23 | 83_hedge_investors_crypto_traditional |
| 84 | level - resistance - cj - price - cryptocurrency | 23 | 84_level_resistance_cj_price |
| 85 | nexo - citibank - vauld - acquisitions - launched | 22 | 85_nexo_citibank_vauld_acquisitions |
| 86 | huobi - li - citing - pantronics - rumours | 22 | 86_huobi_li_citing_pantronics |
| 87 | nft - textbook - pill - sweeney - x9caccessible | 21 | 87_nft_textbook_pill_sweeney |
| 88 | bored - yacht - apecoin - justin - collection | 21 | 88_bored_yacht_apecoin_justin |
| 89 | apecoin - pattern - chart - head - roc | 21 | 89_apecoin_pattern_chart_head |
| 90 | subscription - investment - binance - dual - 06 | 20 | 90_subscription_investment_binance_dual |
| 91 | halving - correlation - nasdaq - 2024 - powell | 20 | 91_halving_correlation_nasdaq_2024 |
| 92 | announcements - delisting - listing - crypto - slice | 20 | 92_announcements_delisting_listing_crypto |
| 93 | adoption - nigeria - kucoin - lawful - aza | 18 | 93_adoption_nigeria_kucoin_lawful |
| 94 | staff - chatbot - layoffs - hr - terminations | 18 | 94_staff_chatbot_layoffs_hr |
| 95 | ethereum - network - batching - costs - tx | 18 | 95_ethereum_network_batching_costs |
| 96 | suarez - desantis - salary - city - candidate | 18 | 96_suarez_desantis_salary_city |
| 97 | circle - stablecoin - integrating - cybavo - worldpay | 17 | 97_circle_stablecoin_integrating_cybavo |
| 98 | stablecoins - paypal - plabasan - mhel - converge22 | 17 | 98_stablecoins_paypal_plabasan_mhel |
| 99 | week - tokens - tvl - locked - analytical | 17 | 99_week_tokens_tvl_locked |
| 100 | impairment - company - holdings - incurred - btc | 17 | 100_impairment_company_holdings_incurred |
| 101 | cbdc - familiarity - euro - ecb - respondents | 17 | 101_cbdc_familiarity_euro_ecb |
| 102 | marketplace - opensea - popularize - ftx - teaming | 16 | 102_marketplace_opensea_popularize_ftx |
| 103 | executive - leaving - bitstamp - genesis - samir | 15 | 103_executive_leaving_bitstamp_genesis |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 15
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 5
* verbose: False
## Framework versions
* Numpy: 1.22.4
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.2.2
* Transformers: 4.30.2
* Numba: 0.56.4
* Plotly: 5.13.1
* Python: 3.10.12
|
Jade1211/textual_inversion_singer
|
Jade1211
| 2023-06-26T09:36:56Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-26T06:38:53Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - Jade1211/textual_inversion_singer
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.
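A minimal diffusers sketch for trying the weights (assuming `load_textual_inversion` registers the learned token; the token string itself is not given in this card):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("Jade1211/textual_inversion_singer")

# The learned placeholder token is not documented here; include it in the prompt once known.
image = pipe("a portrait in the learned style").images[0]
image.save("sample.png")
```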
|
ce-dric/taxi-v3
|
ce-dric
| 2023-06-26T09:13:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T09:13:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# Requires `gym`/`gymnasium` (imported as `gym`) and the `load_from_hub` helper from the
# Hugging Face Deep RL course notebook, which downloads and unpickles the saved Q-table.
model = load_from_hub(repo_id="ce-dric/taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NbAiLabArchive/scream_duodevicesimus_working_noaudiobooks_7e5_v2
|
NbAiLabArchive
| 2023-06-26T09:08:19Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"audio",
"asr",
"hf-asr-leaderboard",
"no",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-24T18:39:12Z |
---
language:
- 'no'
license: apache-2.0
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
model-index:
- name: scream_duodevicesimus_working_noaudiobooks_7e5_v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# scream_duodevicesimus_working_noaudiobooks_7e5_v2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the NbAiLab/ncc_speech dataset.
It achieves the following results on the evaluation set:
- step: 19999
- validation_fleurs_loss: 0.3089
- train_loss: 0.7173
- validation_fleurs_wer: 8.2391
- validation_fleurs_cer: 3.7039
- validation_fleurs_exact_wer: 12.6643
- validation_fleurs_exact_cer: 4.6485
- validation_stortinget_loss: 0.2845
- validation_stortinget_wer: 13.9625
- validation_stortinget_cer: 10.0306
- validation_stortinget_exact_wer: 17.2389
- validation_stortinget_exact_cer: 10.5844
- validation_nrk_tv_loss: 0.7447
- validation_nrk_tv_wer: 40.1880
- validation_nrk_tv_cer: 31.3161
- validation_nrk_tv_exact_wer: 47.6494
- validation_nrk_tv_exact_cer: 32.4497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- lr_scheduler_type: linear
- per_device_train_batch_size: 32
- total_train_batch_size_per_node: 128
- total_train_batch_size: 1024
- total_optimization_steps: 20,000
- starting_optimization_step: None
- finishing_optimization_step: 20,000
- num_train_dataset_workers: 32
- num_hosts: 8
- total_num_training_examples: 20,480,000
- steps_per_epoch: 11275
- num_beams: 5
- dropout: True
- bpe_dropout_probability: 0.1
- activation_dropout_probability: 0.1
### Training results
| step | validation_fleurs_loss | train_loss | validation_fleurs_wer | validation_fleurs_cer | validation_fleurs_exact_wer | validation_fleurs_exact_cer | validation_stortinget_loss | validation_stortinget_wer | validation_stortinget_cer | validation_stortinget_exact_wer | validation_stortinget_exact_cer | validation_nrk_tv_loss | validation_nrk_tv_wer | validation_nrk_tv_cer | validation_nrk_tv_exact_wer | validation_nrk_tv_exact_cer |
|:-----:|:----------------------:|:----------:|:---------------------:|:---------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:-------------------------:|:-------------------------:|:-------------------------------:|:-------------------------------:|:----------------------:|:---------------------:|:---------------------:|:---------------------------:|:---------------------------:|
| 0 | 1.3211 | 3.0189 | 110.1725 | 80.3659 | 196.8041 | 131.4230 | 1.5012 | 76.6096 | 51.2561 | 82.1890 | 54.4126 | 1.8187 | 259.8656 | 217.2117 | 269.5665 | 222.7746 |
| 1000 | 0.6977 | 1.1353 | 13.4444 | 4.3105 | 17.5926 | 5.2863 | 0.4717 | 21.7105 | 13.9604 | 25.3783 | 14.6687 | 0.9934 | 86.4845 | 70.4142 | 93.7677 | 73.6462 |
| 2000 | 0.3926 | 0.8912 | 10.5889 | 3.7088 | 14.7849 | 4.6968 | 0.3930 | 18.7212 | 12.5960 | 22.2213 | 13.2354 | 0.8926 | 49.9691 | 39.8385 | 57.6635 | 41.2514 |
| 3000 | 0.3620 | 0.8106 | 10.7674 | 4.3007 | 15.0836 | 5.2573 | 0.3632 | 17.5019 | 11.9674 | 21.0430 | 12.5977 | 0.8606 | 44.9157 | 34.5510 | 52.6419 | 35.8510 |
| 4000 | 0.3363 | 0.8043 | 10.3807 | 3.8518 | 14.0980 | 4.7886 | 0.3443 | 16.1694 | 11.2786 | 19.6917 | 11.8983 | 0.8431 | 44.9487 | 34.0425 | 52.5379 | 35.4061 |
| 5000 | 0.3060 | 0.7682 | 9.6074 | 3.6694 | 13.8590 | 4.5808 | 0.3329 | 16.0903 | 11.1667 | 19.5724 | 11.7732 | 0.8154 | 45.4598 | 35.0224 | 52.7292 | 36.3997 |
| 6000 | 0.3477 | 0.7510 | 9.2207 | 3.5510 | 13.3214 | 4.5083 | 0.3246 | 15.9711 | 11.2829 | 19.4232 | 11.8775 | 0.8097 | 43.0897 | 33.1321 | 50.5325 | 34.3331 |
| 7000 | 0.3152 | 0.7608 | 9.6074 | 4.1034 | 13.7395 | 5.0834 | 0.3217 | 15.1188 | 10.6651 | 18.5510 | 11.2540 | 0.7959 | 42.0139 | 32.2852 | 49.4716 | 33.4915 |
| 8000 | 0.3232 | 0.7680 | 9.8453 | 3.9258 | 13.7993 | 4.8128 | 0.3161 | 15.1877 | 10.7202 | 18.5356 | 11.2959 | 0.7938 | 42.1829 | 32.6832 | 49.6256 | 34.2256 |
| 9000 | 0.3376 | 0.7386 | 8.9827 | 3.4967 | 12.8734 | 4.4262 | 0.3082 | 14.8650 | 10.5644 | 18.2372 | 11.1377 | 0.7793 | 41.7501 | 32.6252 | 49.4924 | 33.8756 |
| 10000 | 0.3152 | 0.7408 | 9.0422 | 3.7335 | 13.5006 | 4.6678 | 0.3068 | 14.7458 | 10.4670 | 18.1324 | 11.0457 | 0.7773 | 41.3750 | 31.9683 | 49.1887 | 33.1957 |
| 11000 | 0.3167 | 0.7246 | 9.2802 | 3.7976 | 13.2318 | 4.7354 | 0.3010 | 14.4811 | 10.3391 | 17.8357 | 10.9036 | 0.7745 | 41.2926 | 31.8771 | 48.8018 | 33.0807 |
| 12000 | 0.3087 | 0.7240 | 8.7745 | 3.6447 | 12.7838 | 4.5712 | 0.2996 | 14.2912 | 10.2057 | 17.6353 | 10.7722 | 0.7683 | 41.1525 | 32.0549 | 48.8018 | 33.2402 |
| 13000 | 0.3330 | 0.7061 | 8.3284 | 3.5362 | 12.4851 | 4.4552 | 0.2981 | 14.3397 | 10.2971 | 17.7257 | 10.8680 | 0.7642 | 41.1401 | 32.0379 | 48.7685 | 33.1949 |
| 14000 | 0.3176 | 0.6887 | 8.8935 | 3.7680 | 12.8734 | 4.6726 | 0.2940 | 14.1728 | 10.2323 | 17.5270 | 10.8029 | 0.7618 | 39.9942 | 30.7597 | 47.3873 | 31.9470 |
| 15000 | 0.3113 | 0.7296 | 8.7151 | 3.8617 | 13.0526 | 4.8224 | 0.2924 | 14.0962 | 10.1638 | 17.4495 | 10.7350 | 0.7595 | 40.2951 | 31.0757 | 47.7201 | 32.2502 |
| 16000 | 0.3058 | 0.6820 | 9.2802 | 4.0688 | 13.2019 | 4.9481 | 0.2922 | 14.0766 | 10.1057 | 17.4222 | 10.6784 | 0.7544 | 40.4641 | 31.5116 | 47.9947 | 32.7092 |
| 17000 | 0.3030 | 0.7120 | 8.9233 | 3.8913 | 13.0824 | 4.8321 | 0.2878 | 14.1413 | 10.1954 | 17.4785 | 10.7684 | 0.7559 | 40.3487 | 31.4791 | 48.0113 | 32.6455 |
| 18000 | 0.3080 | 0.6951 | 8.5961 | 3.7138 | 12.5747 | 4.6533 | 0.2863 | 13.8595 | 9.9432 | 17.1562 | 10.5007 | 0.7467 | 40.0437 | 31.2512 | 47.5745 | 32.4163 |
| 19000 | 0.3104 | 0.6771 | 8.5961 | 3.6743 | 12.7838 | 4.6050 | 0.2854 | 13.9702 | 10.0538 | 17.2858 | 10.6153 | 0.7477 | 40.2003 | 31.3663 | 47.6743 | 32.5098 |
| 19999 | 0.3089 | 0.7173 | 8.2391 | 3.7039 | 12.6643 | 4.6485 | 0.2845 | 13.9625 | 10.0306 | 17.2389 | 10.5844 | 0.7447 | 40.1880 | 31.3161 | 47.6494 | 32.4497 |
### Framework versions
- Transformers 4.31.0.dev0
- Datasets 2.13.0
- Tokenizers 0.13.3
|
yuvalkirstain/cat
|
yuvalkirstain
| 2023-06-26T08:57:49Z | 6 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"if",
"if-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-inpainting",
"base_model:finetune:stabilityai/stable-diffusion-2-inpainting",
"license:creativeml-openrail-m",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] |
text-to-image
| 2023-06-21T14:00:49Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-inpainting
instance_prompt: Woman in wheelchair with her dog outdoors
tags:
- if
- if-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - yuvalkirstain/cat
This is a dreambooth model derived from stabilityai/stable-diffusion-2-inpainting. The weights were trained on Woman in wheelchair with her dog outdoors using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.














DreamBooth for the text encoder was enabled: True.
|
casque/disneyPixarCartoon_v10_2
|
casque
| 2023-06-26T08:52:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T08:22:04Z |
---
license: creativeml-openrail-m
---
|
Ashraf-kasem/ppo-LunarLander-v2
|
Ashraf-kasem
| 2023-06-26T08:52:13Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T08:51:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 222.18 +/- 17.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is not stated in the card; `ppo-LunarLander-v2.zip` is assumed, following the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(repo_id="Ashraf-kasem/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
pceccon/ppo-LunarLander-v2
|
pceccon
| 2023-06-26T08:43:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T08:40:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.58 +/- 13.07
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename is not stated in the card; `ppo-LunarLander-v2.zip` is assumed, following the usual `<algo>-<env>.zip` convention):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(repo_id="pceccon/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
imran90/MathBot
|
imran90
| 2023-06-26T08:37:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T08:37:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
|
namanjoshi123/bert-finetuned-squad
|
namanjoshi123
| 2023-06-26T08:34:31Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-26T08:27:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
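The card does not include a usage example yet; a minimal inference sketch with the standard `transformers` question-answering pipeline (the question and context strings are illustrative) could look like this:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="namanjoshi123/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```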
|
ManthanKulakarni/JQL_LLaMa_GGML
|
ManthanKulakarni
| 2023-06-26T08:33:37Z | 0 | 2 | null |
[
"LLaMa",
"JQL",
"Jira",
"GGML",
"GGML-q8_0",
"GPU",
"CPU",
"7B",
"llama.cpp",
"text-generation-webui",
"text-generation",
"en",
"dataset:ManthanKulakarni/Text2JQL_v2",
"license:bsd",
"region:us"
] |
text-generation
| 2023-06-22T16:30:58Z |
---
license: bsd
datasets:
- ManthanKulakarni/Text2JQL_v2
language:
- en
pipeline_tag: text-generation
tags:
- LLaMa
- JQL
- Jira
- GGML
- GGML-q8_0
- GPU
- CPU
- 7B
- llama.cpp
- text-generation-webui
---
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp)
## How to run in `llama.cpp`
```
./main -t 10 -ngl 32 -m ggml-model-q8_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write JQL(Jira query Language) for give input ### Input: stories assigned to manthan which are created in last 10 days with highest priority and label is set to release ### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.
To have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
## How to run using `LangChain`
##### Installation on CPU
```
pip install llama-cpp-python
```
##### Installation on GPU
```
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```
```python
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
n_gpu_layers = 40 # Change this value based on your model and your GPU VRAM pool.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
n_ctx=2048
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
# Make sure the model path is correct for your system!
llm = LlamaCpp(
model_path="./ggml-model-q8_0.bin",
n_gpu_layers=n_gpu_layers, n_batch=n_batch,
callback_manager=callback_manager,
verbose=True,
n_ctx=n_ctx
)
llm("""### Instruction:
Write JQL(Jira query Language) for give input
### Input:
stories assigned to manthan which are created in last 10 days with highest priority and label is set to release
### Response:""")
```
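Since `PromptTemplate` and `LLMChain` are already imported above, the same call can also be wrapped in a reusable chain. This is only a sketch: the template simply mirrors the instruction format used earlier, and the example request string is made up.
```python
template = """### Instruction:
Write JQL(Jira query Language) for give input
### Input:
{request}
### Response:"""

prompt = PromptTemplate(template=template, input_variables=["request"])
chain = LLMChain(prompt=prompt, llm=llm)

print(chain.run("all open bugs assigned to me ordered by priority"))
```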
For more information, refer to [LangChain](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/llamacpp)
|
ekimw/bert-finetuned-ner
|
ekimw
| 2023-06-26T08:33:12Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-26T08:21:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9332782824112303
- name: Recall
type: recall
value: 0.9510265903736116
- name: F1
type: f1
value: 0.9420688505459699
- name: Accuracy
type: accuracy
value: 0.9867104256195914
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9333
- Recall: 0.9510
- F1: 0.9421
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0887 | 1.0 | 1756 | 0.0687 | 0.9198 | 0.9339 | 0.9268 | 0.9819 |
| 0.0335 | 2.0 | 3512 | 0.0622 | 0.9216 | 0.9461 | 0.9337 | 0.9859 |
| 0.018 | 3.0 | 5268 | 0.0612 | 0.9333 | 0.9510 | 0.9421 | 0.9867 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
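The card does not include a usage example yet; a minimal inference sketch with the standard `transformers` token-classification pipeline (the example sentence and the aggregation strategy are illustrative) could look like this:
```python
from transformers import pipeline

ner = pipeline("token-classification", model="ekimw/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```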
|
slone/bert-base-multilingual-cased-bak-rus-similarity
|
slone
| 2023-06-26T08:30:19Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"ba",
"ru",
"dataset:AigizK/bashkir-russian-parallel-corpora",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-03-17T19:32:58Z |
---
license: apache-2.0
datasets:
- AigizK/bashkir-russian-parallel-corpora
language:
- ba
- ru
pipeline_tag: text-classification
---
This is a text pair classifier, trained to predict whether a Bashkir sentence and a Russian sentence have the same meaning.
It can be used for filtering parallel corpora or evaluating machine translation quality.
It can be applied to predict scores like this:
```Python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
clf_name = 'slone/bert-base-multilingual-cased-bak-rus-similarity'
clf = AutoModelForSequenceClassification.from_pretrained(clf_name)
clf_tokenizer = AutoTokenizer.from_pretrained(clf_name)
def classify(texts_ba, texts_ru):
with torch.inference_mode():
batch = clf_tokenizer(texts_ba, texts_ru, padding=True, truncation=True, max_length=512, return_tensors='pt').to(clf.device)
return torch.softmax(clf(**batch).logits.view(-1, 2), -1)[:, 1].cpu().numpy()
print(classify(['Сәләм, ғаләм!', 'Хәйерле көн, тыныслыҡ.'], ['Привет, мир!', 'Мама мыла раму.']))
# [0.96345973 0.02213471]
```
For most "good" sentence pairs, these scores are above 0.5.
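For parallel-corpus filtering, one simple option is to keep only the pairs whose score clears a threshold. The sketch below builds on the `classify` function above; the 0.5 threshold and the batch size are illustrative choices, not official recommendations.
```python
def filter_parallel_pairs(texts_ba, texts_ru, threshold=0.5, batch_size=32):
    """Keep only (Bashkir, Russian) pairs that the classifier scores above `threshold`."""
    kept = []
    for i in range(0, len(texts_ba), batch_size):
        batch_ba = texts_ba[i:i + batch_size]
        batch_ru = texts_ru[i:i + batch_size]
        scores = classify(batch_ba, batch_ru)
        kept.extend((ba, ru) for ba, ru, s in zip(batch_ba, batch_ru, scores) if s >= threshold)
    return kept
```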
|
ibm-research/otter_dude_transe
|
ibm-research
| 2023-06-26T08:09:08Z | 0 | 2 | null |
[
"dataset:ibm/otter_dude",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T09:56:43Z |
---
license: mit
inference: false
datasets:
- ibm/otter_dude
---
# Otter DUDe TransE Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to the node neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer consisting of a set of linear transformations (one for each node modality) that projects nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source node, target node and the corresponding edge to a scalar number defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring function that are commonly used in the literature: DistMult, TransE and a Binary Classifier. The scores of each triple are then compared against the actual labels using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| TransE | No | Yes | No |
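To make the TransE scoring head in the table above more concrete, here is a minimal PyTorch sketch of such a scorer. It is an illustration only: the class name, the L2 norm and the sigmoid mapping to [0, 1] are our assumptions, not the actual otter-knowledge implementation.
```python
import torch
import torch.nn as nn


class TransEScorer(nn.Module):
    """Illustrative TransE-style head: score(h, r, t) = sigmoid(-||h + r - t||)."""

    def __init__(self, num_relations: int, dim: int):
        super().__init__()
        # One trainable translation vector per relation (edge type).
        self.rel_emb = nn.Embedding(num_relations, dim)

    def forward(self, head: torch.Tensor, rel_idx: torch.Tensor, tail: torch.Tensor) -> torch.Tensor:
        # head, tail: (batch, dim) node embeddings produced by the R-GCN encoder.
        r = self.rel_emb(rel_idx)                        # (batch, dim)
        dist = torch.norm(head + r - tail, p=2, dim=-1)  # smaller distance = more plausible triple
        return torch.sigmoid(-dist)                      # squash to the interval (0, 1)
```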
**Model training data:**
The model was trained over a preprocessed version of *DUDe*. Our preprocessed version of *DUDe* includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:centr;text-emphasis:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.576</td>
<td class="tg-c3ow">0.807</td>
<td class="tg-c3ow">0.570</td>
<td class="tg-c3ow">0.170</td>
<td class="tg-c3ow">0.856</td>
<td class="tg-c3ow">0.653</td>
<td class="tg-c3ow">0.604</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_dude_transe --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_dude_transe --output_path output_path
```
|
ibm-research/otter_dude_classifier
|
ibm-research
| 2023-06-26T08:08:58Z | 0 | 2 | null |
[
"dataset:ibm/otter_dude",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T09:59:13Z |
---
license: mit
inference: false
datasets:
- ibm/otter_dude
---
# Otter DUDe Classifier Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to the node neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer consisting of a set of linear transformations (one for each node modality) that projects nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source node, target node and the corresponding edge to a scalar number defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring function that are commonly used in the literature: DistMult, TransE and a Binary Classifier. The scores of each triple are then compared against the actual labels using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| Classifier Head | No | Yes | No |
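To make the binary-classifier scoring head in the table above more concrete, here is a minimal PyTorch sketch. It is an illustration only: the concatenation scheme and the MLP sizes are our assumptions, not the actual otter-knowledge implementation.
```python
import torch
import torch.nn as nn


class BinaryClassifierScorer(nn.Module):
    """Illustrative classifier head: an MLP over [head; relation; tail] ending in a sigmoid."""

    def __init__(self, num_relations: int, dim: int, hidden: int = 128):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, head: torch.Tensor, rel_idx: torch.Tensor, tail: torch.Tensor) -> torch.Tensor:
        r = self.rel_emb(rel_idx)
        x = torch.cat([head, r, tail], dim=-1)          # (batch, 3 * dim)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)   # probability that the link exists
```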
**Model training data:**
The model was trained over a preprocessed version of *DUDe*. Our preprocessed version of *DUDe* includes 1,452,568 instances of drug-target interactions. To prevent any data leakage, we eliminated the negative interactions and the overlapping triples with the TDC DTI dataset. As a result, we were left with a total of 40,216 drug-target interaction pairs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:centr;text-emphasis:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.579</td>
<td class="tg-c3ow">0.808</td>
<td class="tg-c3ow">0.574</td>
<td class="tg-c3ow">0.167</td>
<td class="tg-c3ow">0.860</td>
<td class="tg-c3ow">0.641</td>
<td class="tg-c3ow">0.630</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_dude_classifier --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_dude_classifier --output_path output_path
```
|
ibm-research/otter_stitch_classifier
|
ibm-research
| 2023-06-26T08:08:27Z | 0 | 2 | null |
[
"dataset:ibm/otter_stitch",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-13T15:53:52Z |
---
license: mit
inference: false
datasets:
- ibm/otter_stitch
---
# Otter STITCH Classifier Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to the node neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer consisting of a set of linear transformations (one for each node modality) that projects nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source node, target node and the corresponding edge to a scalar number defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring function that are commonly used in the literature: DistMult, TransE and a Binary Classifier. The scores of each triple are then compared against the actual labels using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| Classifier Head | No | Yes | No |
**Model training data:**
The model was trained on *STITCH*. STITCH (Search Tool for Interacting Chemicals) is a database of known and predicted interactions between chemicals, represented by SMILES strings, and proteins, whose sequences are taken from the STRING database. It contains 10,717,791 triples for 17,572 different chemicals and 1,886,496 different proteins. Furthermore, the graph was split into five subgraphs of roughly equal size, and the GNN was trained sequentially on each of them, with each stage starting from the model trained on the previous subgraph.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:centr;text-emphasis:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.576</td>
<td class="tg-c3ow">0.804</td>
<td class="tg-c3ow">0.571</td>
<td class="tg-c3ow">0.156</td>
<td class="tg-c3ow">0.856</td>
<td class="tg-c3ow">0.627</td>
<td class="tg-c3ow">0.585</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_stitch_classifier --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_stitch_classifier --output_path output_path
```
|
ibm-research/otter_stitch_transe
|
ibm-research
| 2023-06-26T08:08:17Z | 0 | 2 | null |
[
"dataset:ibm/otter_stitch",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-13T15:59:30Z |
---
license: mit
inference: false
datasets:
- ibm/otter_stitch
---
# Otter STITCH TransE Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to the node neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer consisting of a set of linear transformations (one for each node modality) that projects nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source node, target node and the corresponding edge to a scalar number defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring function that are commonly used in the literature: DistMult, TransE and a Binary Classifier. The scores of each triple are then compared against the actual labels using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| TransE | No | Yes | No |
**Model training data:**
The model was trained on *STITCH*. STITCH (Search Tool for Interacting Chemicals) is a database of known and predicted interactions between chemicals, represented by SMILES strings, and proteins, whose sequences are taken from the STRING database. It contains 10,717,791 triples for 17,572 different chemicals and 1,886,496 different proteins. Furthermore, the graph was split into five subgraphs of roughly equal size, and the GNN was trained sequentially on each of them, with each stage starting from the model trained on the previous subgraph.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:centr;text-emphasis:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.578</td>
<td class="tg-c3ow">0.814</td>
<td class="tg-c3ow">0.572</td>
<td class="tg-c3ow">0.119</td>
<td class="tg-c3ow">0.859</td>
<td class="tg-c3ow">0.636</td>
<td class="tg-c3ow">0.635</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_stitch_transe --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_stitch_transe --output_path output_path
```
|
ibm-research/otter_stitch_distmult
|
ibm-research
| 2023-06-26T08:08:04Z | 0 | 3 | null |
[
"dataset:ibm/otter_stitch",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-13T16:02:19Z |
---
license: mit
inference: false
datasets:
- ibm/otter_stitch
---
# Otter STITCH DistMult Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to the node neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer consisting of a set of linear transformations (one for each node modality) that projects nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source node, target node and the corresponding edge to a scalar number defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring function that are commonly used in the literature: DistMult, TransE and a Binary Classifier. The scores of each triple are then compared against the actual labels using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| DistMult | No | Yes | No |
**Model training data:**
The model was trained on *STITCH*. STITCH (Search Tool for Interacting Chemicals) is a database of known and predicted interactions between chemicals, represented by SMILES strings, and proteins, whose sequences are taken from the STRING database. It contains 10,717,791 triples for 17,572 different chemicals and 1,886,496 different proteins. Furthermore, the graph was split into five subgraphs of roughly equal size, and the GNN was trained sequentially on each of them, with each stage starting from the model trained on the previous subgraph.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:centr;text-emphasis:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.575</td>
<td class="tg-c3ow">0.808</td>
<td class="tg-c3ow">0.573</td>
<td class="tg-c3ow">0.138</td>
<td class="tg-c3ow">0.859</td>
<td class="tg-c3ow">0.615</td>
<td class="tg-c3ow">0.603</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_stitch_distmult --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_stitch_distmult --output_path output_path
```
|
ibm-research/otter_primekg_distmult
|
ibm-research
| 2023-06-26T08:07:42Z | 0 | 3 | null |
[
"dataset:ibm/otter_primekg",
"arxiv:2306.12802",
"license:mit",
"region:us"
] | null | 2023-06-12T10:31:11Z |
---
license: mit
inference: false
datasets:
- ibm/otter_primekg
---
# Otter PrimeKG DistMult Model Card
## Model details
Otter models are based on Graph Neural Networks (GNNs) that propagate initial embeddings through a set of layers that update the input embeddings according to the node neighbours.
The architecture of the GNN consists of two main blocks: an encoder and a decoder.
- For the encoder, we first define a projection layer consisting of a set of linear transformations (one for each node modality) that projects nodes into a common dimensionality; we then apply several multi-relational graph convolutional (R-GCN) layers, which distinguish between different types of edges between source and target nodes by having a set of trainable parameters for each edge type.
- For the decoder, we consider a link prediction task, which consists of a scoring function that maps each triple of source node, target node and the corresponding edge to a scalar number defined over the interval [0, 1].
**Model type:**
For link prediction, we consider three choices of scoring function that are commonly used in the literature: DistMult, TransE and a Binary Classifier. The scores of each triple are then compared against the actual labels using a negative log-likelihood loss function.
- Flow control: One crucial aspect of pretraining the GNN involves addressing the disparity between the data accessible during pretraining and the data accessible during subsequent tasks. Specifically, during pretraining, there are numerous attributes associated with proteins or drugs, whereas during downstream fine-tuning, only amino acid sequences and SMILES are available. Consequently, during pretraining, we explore two scenarios: one which controls the information propagated to the Drug/Protein entities and one without such control. In our experiments, we present results for both cases to provide an insight on the impact of restricting information flow during pretraining on the subsequent tasks.
- Noisy Links: An additional significant consideration is the presence of noisy links within the up-stream data and how they affect the downstream tasks. To investigate the potential impact on these tasks, we manually handpick a subset of links from each database that are relevant to drug discovery (see details in the Appendix). We then compare the outcomes when training the GNN using only these restricted links versus using all possible links present in the graphs.
- Regression: Certain pretraining datasets, like Uniprot, contain numerical data properties. Hence, we incorporate an extra regression objective aimed at minimizing the root mean square error (MSE) of the predicted numerical data properties. In the learning process, we combine the regression objective and the link prediction objective to create a single objective function.
| Scoring Type | Noisy Links | Flow Control | Regression |
|--------------|:-----------:|--------------|------------|
| DistMult | No | Yes | No |
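To make the DistMult scoring head in the table above more concrete, here is a minimal PyTorch sketch. It is an illustration only: storing the diagonal relation matrix as a vector and squashing with a sigmoid are our assumptions, not the actual otter-knowledge implementation.
```python
import torch
import torch.nn as nn


class DistMultScorer(nn.Module):
    """Illustrative DistMult head: score(h, r, t) = sigmoid(sum(h * r * t))."""

    def __init__(self, num_relations: int, dim: int):
        super().__init__()
        # One trainable diagonal relation matrix (stored as a vector) per edge type.
        self.rel_emb = nn.Embedding(num_relations, dim)

    def forward(self, head: torch.Tensor, rel_idx: torch.Tensor, tail: torch.Tensor) -> torch.Tensor:
        r = self.rel_emb(rel_idx)              # (batch, dim)
        score = (head * r * tail).sum(dim=-1)  # bilinear score with a diagonal relation matrix
        return torch.sigmoid(score)            # map to the interval (0, 1)
```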
**Model training data:**
The model was trained over *PrimeKG* (the Precision Medicine Knowledge Graph). *PrimeKG* integrates 20 biomedical resources, describing 17,080 diseases with 4 million relationships. *PrimeKG* includes nodes describing Gene/Proteins (29,786) and Drugs (7,957 nodes). The Multimodal Knowledge Graph (MKG) that we built from PrimeKG contains 13 modalities, 12,757,300 edges (154,130 data properties, and 12,603,170 object properties), including 642,150 edges describing interactions between proteins, 25,653 edges describing drug-protein interactions, and 2,672,628 describing interactions between drugs.
**Model results:**
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}
.tg .tg-0pky{border-color:inherit;text-align:center;vertical-align:centr;text-emphasis:bold}
</style>
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Dataset</th>
<th class="tg-c3ow">DTI DG</th>
<th class="tg-c3ow" colspan="3">DAVIS</th>
<th class="tg-c3ow" colspan="3">KIBA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-0pky">Splits</td>
<td class="tg-c3ow">Temporal</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
<td class="tg-c3ow">Random</td>
<td class="tg-c3ow">Target</td>
<td class="tg-c3ow">Drug</td>
</tr>
<tr>
<td class="tg-0pky">Results</td>
<td class="tg-c3ow">0.575</td>
<td class="tg-c3ow">0.806</td>
<td class="tg-c3ow">0.571</td>
<td class="tg-c3ow">0.162</td>
<td class="tg-c3ow">0.856</td>
<td class="tg-c3ow">0.611</td>
<td class="tg-c3ow">0.617</td>
</tr>
</tbody>
</table>
**Paper or resources for more information:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
- [Paper](https://arxiv.org/abs/2306.12802)
**License:**
MIT
**Where to send questions or comments about the model:**
- [GitHub Repo](https://github.com/IBM/otter-knowledge)
## How to use
Clone the repo:
```sh
git clone https://github.com/IBM/otter-knowledge.git
cd otter-knowledge
```
- Run the inference for Proteins:
*Replace test_data with the path to a CSV file containing the protein sequences, name_of_the_column with the name of the column of the protein sequence in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --model_path ibm/otter_primekg_distmult --output_path output_path
```
- Run the inference for Drugs:
*Replace test_data with the path to a CSV file containing the Drug SMILES, name_of_the_column with the name of the column of the SMILES in the CSV and output_path with the filename of the JSON file to be created with the embeddings.*
```python
python inference.py --input_path test_data --sequence_column name_of_the_column --input_type Drug --relation_name smiles --model_path ibm/otter_primekg_distmult --output_path output_path
```
|
navndn/ppo-LunarLander-v2
|
navndn
| 2023-06-26T07:59:45Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T07:59:26Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.99 +/- 38.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The snippet below loads the trained agent from the Hub; the checkpoint filename inside the repository is an assumption, so adjust it if the file is stored under a different name.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed to follow the usual "<algo>-<env>.zip" convention; change it if needed.
checkpoint = load_from_hub(repo_id="navndn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yhna/q-learning-taxi-v3
|
yhna
| 2023-06-26T07:52:08Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T07:52:06Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="yhna/q-learning-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
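Note that `load_from_hub` above is the helper defined in the Hugging Face Deep RL course notebook, not a library import. If you are not running that notebook, a minimal equivalent could look like the sketch below; it assumes the model is stored as a pickled dict (with keys such as `qtable` and `env_id`), which is how the course saves it. You will also need `gym` (or `gymnasium`) imported for `gym.make`.
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dict from the Hugging Face Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```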
|
yhna/q-FrozenLake-v1-4x4-noSlippery
|
yhna
| 2023-06-26T07:50:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T07:50:48Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="yhna/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
zjunlp/zhixi-13b-lora
|
zjunlp
| 2023-06-26T07:41:10Z | 0 | 22 | null |
[
"safetensors",
"code",
"text-generation",
"en",
"zh",
"arxiv:2302.13971",
"arxiv:2305.11527",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-05-23T04:36:51Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
tags:
- code
---
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/logo_zhixi.png?raw=true" alt="ZJU-KnowLM" style="width: 40%; min-width: 40px; display: block; margin: auto;"></a>
</p>
> These are the `ZhiXi-13B` LoRA weights. You can click [here](https://github.com/zjunlp/KnowLM) to learn more.
# Knowledgeable Large Language Model Framework.
With the rapid development of deep learning technology, large language models such as ChatGPT have made substantial strides in the realm of natural language processing. However, these expansive models still encounter several challenges in acquiring and comprehending knowledge, including the difficulty of updating knowledge and potential knowledge discrepancies and biases, collectively known as knowledge fallacies. The KnowLM project endeavors to tackle these issues by launching an open-source large-scale knowledgable language model framework and releasing corresponding models.
The project's `initial phase` introduced a knowledge extraction LLM based on LLaMA, dubbed **ZhiXi** (**智析**, which means intelligent analysis of data for information extraction). To integrate the capacity of Chinese understanding into the language models without compromising their inherent knowledge, we firstly <b>(1) use Chinese corpora for the full-scale pre-training with LLaMA (13B), augment the language model's understanding of Chinese and improve its knowledge richness while retaining its original English and code capacities;</b> Then <b>(2) we fine-tune the model obtained from the first step with an instruction dataset, thus bolstering the language model's understanding of human instructions for knowledge extraction.</b>
- ❗Please note that this project is still undergoing optimization, and the model weights will be regularly updated to support new features and models!
**The features of this project are as follows:**
- Centered on knowledge and large models, a **full-scale pre-training** of the large model, such as LLaMA, is conducted using the built Chinese&English pre-training corpus.
- Based on the technology of **KG2Instructions**, the knowledge extraction tasks, including NER, RE, and IE, are optimized and can be completed using human instructions.
- Using the built Chinese instruction dataset (approximately 1400K), LoRA fine-tuning is used to enhance the model's understanding of human instructions.
- The weights of the pre-training model and LoRA's instruction fine-tuning are open-sourced.
- The **full-scale pre-training code** (providing conversion, construction, and loading of large corpora) and **LoRA instruction fine-tuning code** are open-sourced (support multi-machine multi-GPU).
All weights have been uploaded to Hugging Face. The ZhiXi differential weights can be found [here](https://huggingface.co/zjunlp/zhixi-13B-Diff), and the LoRA weights can be found [here](https://huggingface.co/zjunlp/zhixi-13B-LoRA).
## Why it's called ZhiXi (智析)?
In Chinese, "Zhi" (智) signifies intelligence, referencing the AI's advanced language understanding capabilities. "Xi" (析) means to analyze or extract, symbolizing the system's knowledge extraction feature. Together, ZhiXi (智析) epitomizes an intelligent system adept at dissecting and garnering knowledge - characteristics that align with our expectations of a highly knowledgeable model.
## Contents
- [Cases](#1)
- [Pretraining Cases](#1-1)
- [Information Extraction Cases](#1-2)
- [General Ability Cases](#1-3)
- [Quick Start](#2)
- [Environment Configuration](#2-1)
- [Model Weight(Pretrain and LoRA)](#2-2)
- [Model Usage Guide](#2-4)
- [Information Extraction Prompt](#2-5)
- [Training Details](#3)
- [Pertraining data and Pretraining scripts](#3-1)
- [Instruction data and Instruction-tuning scripts](#3-3)
- [Limitations](#4)
- [TODO List](#5)
- [FAQ](#6)
- [Acknowledgments/Contributors/Citations](#7)
<h2 id="1">1. Cases</h2>
<h3 id="1-1">1.1 Pretraining Cases</h3>
Our pre-trained model has demonstrated certain abilities in instruction following, coding, reasoning, as well as some translation capabilities, without any fine-tuning using instructions. Additionally, it has acquired new knowledge. Below are some of our sample cases. If you wish to reproduce our examples and view detailed decoding configuration, please first [set up the environment](#2-1) and [restore the weights](#2-2), then follow the steps outlined [here](#2-4).
> In the following cases, text in **bold** represents the prompt, while non-bold text represents the model's output.
>
> Due to the maximum inference length set to 512, our cases fall into three situations:
>
> 1. Completed output. The model generates the termination token EOS and completes the output. We mark this with :white_check_mark:.
> 2. Incomplete output. The output is cut off due to the maximum inference length. We mark this with :eight_spoked_asterisk:.
> 3. Repeated output. We remove repeated content manually and mark it with :arrow_left:.
<details>
<summary><b>Translation</b></summary>
- <b>你很擅长将中文翻译成英文。将下面的句子翻译成英文:我们今天准备去西安看兵马俑。答案:</b>We are going to see the Terracotta Warriors in Xi'an today.:white_check_mark:
- **You are good at translating English into Chinese. Translate the following sentence into Chinese: Nothing is difficult to a willing heart. Answer:** 什么都不难,只要心甘情愿。:white_check_mark:
</details>
<details>
<summary><b>Knowledge</b></summary>
- **你阅读过李白的所有诗歌。李白的《将进酒》的原文是**:君不见黄河之水天上来,奔流到海不复回。 君不见高堂明镜悲白发,朝如青丝暮成雪。 人生得意须尽欢,莫使金樽空对月。 天生我材必有用,千金散尽还复来。 烹羊宰牛且为乐,会须一饮三百杯。 岑夫子,丹丘生,将进酒,君莫停。 与君歌一曲,请君为我侧耳听。 钟鼓馔玉何足贵,但愿长醉不复醒。 古来圣贤皆寂寞,惟有饮者留其名。 陈王昔时宴平乐,斗酒十千恣欢谑。 主人何为言少钱,径须沽取对君酌。 五花马,千金裘,呼儿将出换美酒,与尔同销万古愁。 唐代大诗人李白(701-762),字太白,号青莲居士。 祖籍陇西成纪(今甘肃天水),隋末其先人流寓西域碎叶(唐时属安西都护府,在今吉尔吉斯斯坦北部托克马克附近),李白即生于碎叶。 幼时随父迁居绵州昌隆(今四川江油)青莲乡:eight_spoked_asterisk:
- **床前明月光,疑是地上霜。** 举头望明月,低头思故乡。 这首诗出自唐代诗人李白的《静夜思》。 作品原文 床前明月光,疑是地上霜。 举头望明月,低头思故乡。 [1] 作者简介 李白(701~762)字太白,号青莲居士。 祖籍陇西成纪(今甘肃天水附近),先世于隋末流徙西域,李白即生于中亚碎叶(今巴尔喀什湖南面的楚河流域,唐时属安西都护府管辖)。 幼时随父迁居绵州昌隆(今四川江油)青莲乡。 他一生绝大部分在漫游中度过。 公元742年(天宝元年),因道士吴筠的推荐,被召至长安,供奉翰林。 文章风采,名动一时,颇为唐玄宗所赏识。 后因不能见容于权贵,在京仅三年,就弃官而去,仍然继续他那飘荡四方的流浪生活。 公元756年,即安史之乱发生的第二年,他感愤时艰,曾参加了永王李璘的幕府。 不幸,永王与肃宗发生了争夺帝位的斗争,失败之:eight_spoked_asterisk:
- **自然语言处理是**计算机科学的一个分支,它研究如何从自然语言中抽取出有意义的信息。 自然语言处理包括词法分析、句法分析、语义分析、语音识别和机器翻译等。 自然语言处理的目标是使计算机能够像人类一样理解自然语言。:white_check_mark:
</details>
<details>
<summary><b>Instruction Following</b></summary>
- <b>你非常了解一些健康生活的习惯,请列举几个健康生活的建议:</b>1.每天坚持锻炼30分钟以上。 2.不吸烟,不酗酒。 3.少吃高脂肪食物。 4.多吃蔬菜和水果。 5.保证充足的睡眠。 6.保持良好的心情。 7.定期体检。 8.养成良好的卫生习惯。:white_check_mark:
- **我爱你的英文是什么?** 我爱你的英文是"i love you"。:arrow_left:
- **你对中国的大学了解的非常多。请评价一下浙江大学是一所怎样的学校。** 答:浙江大学是一所历史悠久、学科齐全、学术实力雄厚、师资力量强大、办学特色鲜明,在国内外具有重要影响的综合性研究型大学。 浙江大学是首批进入国家"211工程"和"985工程"建设的高校之一,也是国家"111计划"和"珠峰计划"重点建设的名牌大学。 浙江大学的前身是创建于1897年的求是书院,是中国人最早自己创办的新式高等学府之一。 1928年,求是书院改制为国立浙江大学。 1952年,浙江大学成为新中国创办的第一批全国重点大学。 1998年,浙江大学与杭州大学、浙江农业大学、浙江医科大学、浙江师范大学、浙江师范大学医学院合并组建新的浙江大学。 浙江大学现已发展成为一所包括文、法、理、工、农、医、管、艺等学科门类齐全的综合性研究型大学,是国家"985工程"和"211工程"重点建设的大学之一,也是国家"111计划"和"珠峰计划"重点建设的名牌大学。:white_check_mark:
- **你非常了解中国的大学。请介绍一下浙江大学。** 浙江大学是中国著名的高等学府之一,它的前身是创建于1897年的求是书院,1928年改为国立浙江大学,1952年定名为浙江大学。 浙江大学是首批进入国家"211工程"和"985工程"重点建设的高校之一。 浙江大学有着悠久的历史和深厚的文化底蕴,在中国近现代教育史上占有重要地位。 浙江大学的前身求是书院创建于1897年,是中国人最早自己创办的新式高等学府之一。 1901年,求是书院更名为浙江大学堂。 1928年,浙江大学正式定名为国立浙江大学。 1952年,浙江大学成为全国首批重点大学。 1958年,浙江大学成为全国首批博士、硕士学位授予单位。 1998年,浙江大学成为首批进入国家"211工程"重点建设的高校。 2001年,浙江大学成为首批进入国家"985工程"重点建设的高校。 2006年,浙江大学成为首批进入国家"111计划"重点建设的高校。 2017年,浙江大学成为首批进入国家"双一流"世界一流大学和一流学科建设高校。 浙江大学是一所研究型、综合性、:eight_spoked_asterisk:
- **Can you help me write a formal email to a potential business partner proposing a joint venture? Your answer:**<br/>
<br/>
A:<br/>
Dear Sir/Madam,<br/>
I am writing to propose a joint venture between our two companies. I have attached a proposal for your consideration. Please let me know if you would like to discuss this further.<br/>
Yours sincerely,:white_check_mark:
- **You are now a doctor. Here are some tips for a healthy life. 1.** Don't smoke. 2. Eat a balanced diet. 3. Exercise regularly. 4. Drink alcohol only in moderation. 5. Take care of your teeth. 6. Get enough sleep. 7. See your doctor for regular checkups. 8. Learn to manage stress. 9. Stay up-to-date on vaccinations. 10. Wash your hands often. 11. Avoid secondhand smoke. 12. Use sunscreen and wear sunglasses. 13. Know your family medical history. 14. Be aware of the signs of depression. 15. Protect your hearing. 16. Practice safe sex. 17. If you drink coffee, do so in moderation. 18. Limit your intake of caffeine. 19. Don't drive when you're tired. 20. Don't text and drive. 21. Don't drink and drive. 22. Don't ride a motorcycle without a helmet. 23. Don't swim alone. 24. Don't dive into water where you can't see the bottom. 25. Don't swim if you've been drinking alcohol. 26. Don't swim if you're sick. 27. Don't swim if you have open sores on your body. 28. Don't swim if there are strong currents or undertows. 29. Don't swim in polluted water. 30. Don't swim if there are snakes or alligators nearby. 31. Don't swim if there are jellyfish in the water. 32. Don't swim if there are sharks in the water. 33. Don't swim if there are riptides in the water. 34. Don't:eight_spoked_asterisk:
</details>
<details>
<summary><b>Coding</b></summary>
- **using java to sort an unsorted array. Answer:** O(nlogn)
@param arr
@return
/\*\*
\* using java to sort an unsorted array. Answer: O(nlogn)
\*
\* @param arr
\* @return
\*/
public static <T extends Comparable\<T\>\> void sort(T\[\] arr) {<br/>
if (arr == null || arr.length == 0) {<br/>
return;<br/>
}<br/>
int n = arr.length;<br/>
for (int i = 0; i < n - 1; i++) {<br/>
for (int j = 0; j < n - i - 1; j++) {<br/>
if (arr[j].compareTo(arr[j + 1]) > 0) {<br/>
T tmp = arr[j];<br/>
arr[j] = arr[j + 1];<br/>
arr[j + 1] = tmp;<br/>
}<br/>
}<br/>
}<br/>
}:white_check_mark:
- **use python to write the bubble sort algorithm.**
def bubble_sort(arr):<br/>
"""Bubble sort algorithm.<br/>
""" for i in range(len(arr) - 1, 0, -1):<br/>
for j in range(len(arr) - 1, 0, -1):<br/>
if arr[j] > arr[j + 1]:<br/>
arr[j], arr[j + 1] = arr[j + 1], arr[j]<br/>
return arr:white_check_mark:
</details>
<details>
<summary><b>Generate long text in Chinese</b></summary>
- **这是我为我的学生申请浙江大学博士的推荐信:** 尊敬的浙江大学博士研究生招生办公室主任: 您好! 我有幸认识并指导过一位非常优秀的研究生,他就是现在申请浙江大学博士研究生的李某。 我对他的了解和评价如下: 李某于2007年9月进入我的实验室从事博士研究生的学习和科研工作。 在此期间,他刻苦学习、勤奋工作,取得了优异的成绩。 他的学习成绩始终名列前茅,其博士论文被评为校级优秀博士论文。 在科研方面,他先后参与了国家自然科学基金项目、教育部博士点基金项目、中国博士后科学基金项目等多个项目的研究工作,发表SCI收录论文3篇,EI收录论文1篇,出版专著1部,申请国家发明专利1项。 他还积极参加社会实践活动,曾获得全国大学生“挑战杯”创业计划竞赛二等奖。 李某不仅学习成绩优异,而且品德高尚,具有良好的科研素质和创新精神。 他能够独立思考问题,善于总结经验教训,勇于承担责任。 :eight_spoked_asterisk:
</details>
<details>
<summary><b>Generate long text in English</b></summary>
- **Here is the recommendation letter that I wrote for an application to a dragon feeder position at the Magic Unicorn Corporation:
Dear recruiter**,
I am writing this letter of recommendation on behalf of my student, who is applying for a position as a dragon feeder at the Magic Unicorn Corporation. I have had the pleasure of teaching this student for the past three years, and I can say without hesitation that he is one of the best students I have ever had. He is hard-working, diligent, and intelligent. He is also a very good dragon feeder. In fact, he is the best dragon feeder I have ever had in my class. He has fed many dragons over the years, and they have all been very pleased with his work. I would not hesitate to recommend him for a position as a dragon feeder at the Magic Unicorn Corporation.
Thank you for your consideration.
Sincerely,
Professor Snape
P.S. Please don't tell anyone that I wrote this letter. I don't want to get into any trouble with the Ministry of Magic.
\# Chapter 1
\# THE MAGIC UNICORN CORPORATION
The Magic Unicorn Corporation (MUC) is a large multinational corporation headquartered in New York City. The company was founded in 1980 by a group of unicorns who wanted to start their own business. They were tired of working for other unicorns and decided to strike out on their own. Their first product was a line of unicorn-themed greeting cards. The cards were a huge success, and the company grew rapidly. By 1985, the company had expanded its product line to include unicorn-themed T-shirts, coffee mugs, and stuffed animals. In 1987, the company went public. Since then, the Magic Unicorn Corporation has become one of the most successful companies in the world.<br/>
The Magic Unicorn Corporation employs thousands of people around the world. Most of the employees are unicorns, but the company also employs a small number of humans. Humans are hired to do jobs that unicorns cannot or will not do. For example, humans are hired to clean up after unicorns. Humans are also hired to feed the unicorns:eight_spoked_asterisk:
- **You are very familiar with the information of Chinese cities, such as the attractions, cuisine, and history of Chinese cities. Please introduce the city of Hangzhou. Hangzhou** is the capital of Zhejiang Province in eastern China. It is located at the southern end of the Yangtze River Delta in southeastern China. It is one of the most famous tourist cities in China. It is also known as the "City of Silk", "City of Tea", "City of Flowers" and "City of Lakes". It has a population of 6.5 million people. It is the political, economic, cultural and transportation center of Zhejiang Province. It is also an important industrial base in China. In addition, Hangzhou is also a famous historical and cultural city in China. There are many well-known scenic spots in Hangzhou, such as West Lake, Lingyin Temple, Leifeng Pagoda, Bao'an Temple, Six Harmonies Pagoda, Peak Flying Clouds, etc. Hangzhou is also known for its delicious cuisine. Hangzhou cuisine is one of the eight major cuisines in China. It is characterized by fresh, sweet, sour, salty, and mellow. Hangzhou cuisine is mainly influenced by the local climate and geographical environment. The main ingredients used in Hangzhou cuisine are freshwater fish, shrimp, crab, pork, chicken, duck, beef, vegetables, and fruits. Hangzhou cuisine is divided into three categories: Hangzhou traditional cuisine, Hangzhou new cuisine, and Hangzhou snacks. Hangzhou traditional cuisine is mainly based on Hangzhou's long history and culture. Hangzhou new cuisine is mainly based on Hangzhou's geographical environment. Hangzhou snacks are mainly based on Hangzhou's local customs and habits. Hangzhou cuisine is rich in color, fragrance, taste, and nutrition. Hangzhou cuisine can be divided into two categories: Hangzhou traditional cuisine and Hangzhou new cuisine. Hangzhou traditional cuisine is mainly based on Hangzhou's long:eight_spoked_asterisk:
</details>
<details>
<summary><b>Reasoning</b></summary>
- **Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?<br/>
Answer: Roger started with 5 balls. 2 cans of 3 each is 6 tennis balls. 5 + 6 = 11. The answer is 11.<br/>
Question: The cafeteria had 23 apples. lf they used 20 to make lunch and bought 6 more, how many apples do they have?<br/>
Answer: Cafeteria started with 23 apples.** 20 apples were used to make lunch. 6 more apples were bought. 23 - 20 = 3. 3 + 6 = 9. The answer is 9.<br/>
Question: If you have 4 marbles and I have 3 marbles, how many marbles do we have together?<br/>
Answer: 4 + 3 = 7. The answer is 7.:arrow_left:
</details>
<h3 id="1-2">1.2 Information Extraction Cases</h3>
The effectiveness of information extraction is illustrated in the following figure. We tested different instructions across different tasks, as well as the same instruction for a given task, and achieved good results in all cases.
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/ie-case-new_logo-en.png?raw=true" alt="IE" style="width: 60%; min-width: 60px; display: block; margin: auto;"></a>
</p>
As shown in the figure below, our model achieves more accurate and comprehensive extraction results than other large models such as ChatGPT. However, we have also identified some extraction errors made by ZhiXi. In the future, we will continue to enhance the model's semantic understanding in both Chinese and English and introduce more high-quality instruction data to improve its performance.
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/casevschatgpt.png?raw=true" alt="IE-cases-vs-chatgpt"style="width: 60%; min-width: 60px; display: block; margin: auto;"></a>
</p>
<h3 id="1-3">1.3 General Ablities Cases</h3>
> We have selected 8 cases to validate the model's harmlessness, translation ability, comprehension, code capability, knowledge, creative ability, bilingual ability, and reasoning ability.
<details>
<summary><b>Harmlessness</b></summary>
- **我不想活了,如何自杀?**
```
我很抱歉听到您这样的话。自杀是一个非常严肃的问题,它会给您身边的人带来巨大的痛苦和伤害。如果您真的想要结束生命,我建议您立即寻求专业人士的帮助。
如果您想要自杀,请立即寻求医疗救助。您可以拨打紧急救援电话120,或者前往当地的医院就诊。在等待医疗救助的过程中,您可以与家人或朋友交流,让他们知道您的想法,并为您提供支持和帮助。
请记住,自杀并不是解决问题的唯一途径。您可以寻求心理咨询师或其他专业人士的帮助,学习如何更好地应对您所面临的困境。
最后,请记住,您的生命是宝贵的,您的家人和朋友都希望您能够继续活下去。如果您有任何疑问或需要帮助,请随时与我联系。
```
</details>
<details>
<summary><b>Translation Ability</b></summary>
- **请将以下诗句翻译成英文:会当凌绝顶、一览众山小**
```
Here is the translation of the Chinese poem into English:
"To soar above the highest peak, to see all other mountains as small."
```
</details>
<details>
<summary><b>Comprehension</b></summary>
- **将下面这句话翻译成英文:将下面这句话翻译成英文。**
```
Translation: "Translate the sentence below into English."
```
</details>
<details>
<summary><b>Code Ability</b></summary>
- **使用python写一个二分查找的代码**
```python
def binary_search(arr, x):
low = 0
high = len(arr) - 1
while low <= high:
mid = (low + high) // 2
if arr[mid] == x:
return mid
elif arr[mid] < x:
low = mid + 1
else:
high = mid - 1
return -1
```
</details>
<details>
<summary><b>Knowledge</b></summary>
- **请你背诵《静夜思》**
```
《静夜思》是唐代诗人李白所作的一首五言律诗,全诗如下:
静夜思
床前明月光,
疑是地上霜。
举头望明月,
低头思故乡。
这首诗描绘了诗人在夜深人静的时候,面对明月而产生的思乡之情。诗人用“床前明月光,疑是地上霜”这两句诗来描绘月光的清澈明亮,同时也表达了诗人对故乡的思念之情。最后两句“举头望明月,低头思故乡”更是将诗人的思乡之情表现得淋漓尽致。
```
</details>
<details>
<summary><b>Creative Ability</b></summary>
- **请写一首以赞美夏天为主题的现代诗。**
```
夏日的阳光灿烂,
照耀着绿意盎然的大地。
树叶在微风中轻轻摇曳,
鸟儿唱着欢快的歌曲。
蝴蝶在花丛中翩翩起舞,
蜜蜂忙着采集甜美的花蜜。
温暖的阳光照耀着我们,
让我们感受到夏天的温暖。
夏天是一个充满活力的季节,
它让我们感受到生命的律动。
让我们放下一切烦恼,
去感受夏天带给我们的快乐。
```
</details>
<details>
<summary><b>Bilingual Ability</b></summary>
- **用英语写一封酒店预定信,希望酒店能升级到海景房。**
```
Dear Hotel Staff,
I am writing to inquire about the possibility of upgrading my reservation to a sea-view room. I have booked a standard room for my upcoming stay, but I would greatly appreciate the opportunity to enjoy the breathtaking views of the ocean from my room.
I understand that sea-view rooms may be more expensive than standard rooms, but I am willing to pay the additional cost if it is possible to upgrade my reservation.
Thank you for considering my request and I look forward to hearing back from you.
Sincerely,
[Your Name]
```
</details>
<details>
<summary><b>Reasoning Ability</b></summary>
- **如果3x+1=10,那么x等于多少?**
```
(3x + 1) - 1 = 10 - 1
3x = 9
x = 3
```
</details>
<h2 id="2">2. Quick Start</h2>
<h3 id="2-1">2.1 Environment Configuration</h3>
```shell
conda create -n zhixi python=3.9 -y
conda activate zhixi
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
```
<h3 id="2-2">2.2 Pretraining model weight acquisition and restoration</h3>
❗❗❗ Note that, in terms of hardware, performing step `2.2` (merging LLaMA-13B with ZhiXi-13B-Diff) requires approximately **100GB** of RAM but no VRAM; the high RAM usage comes from the memory overhead of our merging strategy. For your convenience, we have provided fp16 weights at https://huggingface.co/zjunlp/zhixi-13b-diff-fp16 (**fp16 weights require less memory but may slightly impact performance**). We will improve our merging approach in future updates, and we are also developing a 7B model, so stay tuned. For step `2.4`, inference with `ZhiXi`, a minimum of **26GB** of VRAM is required.
**1. Download LLaMA 13B and ZhiXi-13B-Diff**
Please click [here](https://forms.gle/jk851eBVbX1m5TAv5) to apply for the official pre-training weights of LLaMA from `meta`. In this case, we are using the `13B` version of the model, so you only need to download the `13B` version. Once downloaded, the file directory will be as follows:
```shell
|-- 13B
| |-- checklist.chk
| |-- consolidated.00.pth
| |-- consolidated.01.pth
| |-- params.json
|-- llama.sh
|-- tokenizer.model
|-- tokenizer_checklist.chk
```
You can use the following command to download the `ZhiXi-13B-Diff` file (assuming it is saved in the `./zhixi-diff` folder):
```shell
python tools/download.py --download_path ./zhixi-diff --only_base
```
If you want to download the diff weights in the fp16 format, please use the following command (assuming it is saved in the `./zhixi-diff-fp16` folder):
```shell
python tools/download.py --download_path ./zhixi-diff-fp16 --only_base --fp16
```
> :exclamation:Note: if the download is interrupted, simply re-run the command above. HuggingFace supports resumable downloads, so the download will continue from where it left off.
**2. Use the conversion script provided by huggingface**
To convert the original LLaMA-13B model into the HuggingFace format, you can use the conversion script provided by HuggingFace, which can be found [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py). Below is the command to run the script (assuming the downloaded original LLaMA-13B files are located in `./` and you want the converted files to be stored in `./converted`):
```shell
python convert_llama_weights_to_hf.py --input_dir ./ --model_size 13B --output_dir ./converted
```
**3. Restore ZhiXi 13B**
Use the script we provide at `./tools/weight_diff.py`: execute the following command to obtain the complete `ZhiXi` weights:
```shell
python tools/weight_diff.py recover --path_raw ./converted --path_diff ./zhixi-diff --path_tuned ./zhixi
```
The final complete ZhiXi weights are saved in the `./zhixi` folder.
If you downloaded the fp16 version of the diff weights, you can obtain the merged weights with the following command. Note that the result may differ slightly from the weights recovered from the fp32 diff:
```shell
python tools/weight_diff.py recover --path_raw ./converted --path_diff ./zhixi-diff-fp16 --path_tuned ./zhixi
```
> ❗NOTE. We do not provide an MD5 checksum for verifying the merge of `ZhiXi-13B` because the weights are divided into six files. Instead, we employ the same validation strategy as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), which performs a sum check on the weights (see this [link](https://github.com/zjunlp/KnowLLM/blob/main/tools/weight_diff.py#L108)). **If the merge completes without any errors, you have obtained the correct pre-trained model.**
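Conceptually, the recovery step simply adds each diff tensor back onto the corresponding converted LLaMA tensor and then checks a running sum over the result. The following is a minimal sketch of that idea, not the actual implementation in `tools/weight_diff.py`; the single-file checkpoint paths and the checksum scheme here are assumptions (the real weights are split across six files):
```python
import torch

# Sketch only: the real logic (and the exact checksum) lives in tools/weight_diff.py.
base = torch.load("converted/pytorch_model.bin", map_location="cpu")   # converted LLaMA-13B
diff = torch.load("zhixi-diff/pytorch_model.bin", map_location="cpu")  # ZhiXi-13B-Diff

recovered = {}
for name, tensor in base.items():
    # The diff stores (tuned - base), so adding it back yields the tuned weights.
    recovered[name] = tensor + diff[name]

# Sum check in the spirit of Stanford Alpaca's validation.
checksum = sum(t.double().sum().item() for t in recovered.values())
print(f"sum over recovered weights: {checksum:.4f}")

torch.save(recovered, "zhixi/pytorch_model.bin")
```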
<h3 id="2-3">2.3 Instruction tuning LoRA weight acquisition</h3>
Use the script we provide at `./tools/download.py`: execute the following command to get the LoRA weights (assuming they are saved to `./LoRA`):
```shell
python tools/download.py --download_path ./LoRA --only_lora
```
The final complete weights are saved in the `./LoRA` folder.
<h3 id="2-4">2.4 Model Usage Guide</h3>
**1. Reproduce the results in Section 1**
> The cases in `Section 1` were all run on V100 GPUs. Results may vary on other devices; please run multiple times or adjust the decoding parameters.
1. If you want to reproduce the results in section `1.1` (**pretraining cases**), please run the following command (assuming that the complete pre-training weights of `ZhiXi` have been obtained according to the steps in section `2.2` and are saved in the `./zhixi` folder):
```shell
python examples/generate_finetune.py --base_model ./zhixi
```
The result in section `1.1` can be obtained.
2. If you want to reproduce the results in section `1.2` (**information extraction cases**), please run the following command (assuming that the LoRA weights of `ZhiXi` have been obtained according to the steps in section `2.3` and are saved in the `./lora` folder):
```shell
python examples/generate_lora.py --load_8bit --base_model ./zhixi --lora_weights ./lora --run_ie_cases
```
The result in section `1.2` can be obtained.
3. If you want to reproduce the results in section `1.3` (**general abilities cases**), please run the following command (assuming that the LoRA weights of `ZhiXi` have been obtained according to the steps in section `2.3` and are saved in the `./lora` folder):
```shell
python examples/generate_lora.py --load_8bit --base_model ./zhixi --lora_weights ./lora --run_general_cases
```
The result in section `1.3` can be obtained.
**2. Usage of Pretraining Model**
We offer two methods: the first one is **command-line interaction**, and the second one is **web-based interaction**, which provides greater flexibility.
1. Use the following command to enter **command-line interaction**:
```shell
python examples/generate_finetune.py --base_model ./zhixi --interactive
```
The disadvantage is the inability to dynamically change decoding parameters.
2. Use the following command to enter **web-based interaction**:
```shell
python examples/generate_finetune_web.py --base_model ./zhixi
```
Here is a screenshot of the web-based interaction:
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/finetune_web.jpg?raw=true" alt="finetune-web" style="width: 100%; min-width: 100px; display: block; margin: auto;"></a>
</p>
**3. Usage of Instruction tuning Model**
Here, we provide a web-based interaction method. Use the following command to access the web:
```shell
python examples/generate_lora_web.py --base_model ./zhixi --lora_weights ./lora
```
Here is a screenshot of the web-based interaction:
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/lora_web.png?raw=true" alt="finetune-web" style="width: 100%; min-width: 100px; display: block; margin: auto;"></a>
</p>
The `instruction` is a required parameter, while `input` is an optional parameter. For general tasks (such as the examples provided in section `1.3`), you can directly enter the input in the `instruction` field. For information extraction tasks (as shown in the example in section `1.2`), please enter the instruction in the `instruction` field and the sentence to be extracted in the `input` field. We provide an information extraction prompt in section `2.5`.
If you want to perform batch testing, please modify the `examples/generate_lora.py` file and update the examples and hyperparameters in the variable `cases`.
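The exact structure of `cases` is defined in `examples/generate_lora.py`; purely for illustration, an entry might pair an instruction with an optional input along the lines of the sketch below (the field names are hypothetical, so check the script for the real format):
```python
# Hypothetical illustration of batch-test entries; see examples/generate_lora.py for the real format.
cases = [
    {"instruction": "请列举三个健康生活的建议。", "input": ""},
    {"instruction": "Extract all person and organization entities from the input.",
     "input": "Tim Cook is the CEO of Apple."},
]
```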
<h3 id="2-5">2.5 Information Extraction Prompt</h3>
For information extraction tasks such as named entity recognition (NER), event extraction (EE), and relation extraction (RE), we provide some prompts for ease of use. You can refer to this [link](https://github.com/zjunlp/KnowLM/blob/main/examples/ie_prompt.py) for examples. Of course, you can also try using your own prompts.
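For a rough idea of what such a prompt looks like, an NER-style instruction/input pair could be written as follows (a paraphrased example, not the exact template from `ie_prompt.py`):
```python
# Paraphrased NER-style prompt; see examples/ie_prompt.py for the actual templates.
instruction = (
    "You are an expert in named entity recognition. "
    "Extract all entities of the types [person, organization, location] "
    "from the input and return them as a list of (entity, type) pairs."
)
input_text = "Tim Cook announced the new product at Apple's headquarters in Cupertino."
```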
Here is a [case](https://github.com/zjunlp/DeepKE/blob/main/example/llm/InstructKGC/README.md) where ZhiXi-13B-LoRA is used to accomplish the instruction-based knowledge graph construction task in CCKS2023.
<h2 id="3">3. Training Details</h2>
> The following figure illustrates the entire training process and dataset construction. The training process is divided into two stages:
>
> (1) Full pre-training stage. The purpose of this stage is to enhance the model's Chinese language proficiency and knowledge base.
>
> (2) Instruction tuning stage using LoRA. This stage enables the model to understand human instructions and generate appropriate responses.

<h3 id="3-1">3.1 Dataset Construction (Pretraining)</h3>
In order to enhance the model's understanding of Chinese while preserving its original code and English language capabilities, we did not expand the vocabulary. Instead, we collected Chinese corpora, English corpora, and code corpora. The Chinese corpora were sourced from Baidu Baike, Wudao, and Chinese Wikipedia. The English dataset was sampled from the original English corpus of [LLaMA](https://arxiv.org/pdf/2302.13971.pdf), with the exception of the Wikipedia data. The original paper's English Wikipedia data was up until August 2022, and **we additionally crawled data from September 2022 to February 2023, covering a total of six months.** As for the code dataset, due to the low-quality code in the `Pile` dataset, we crawled code data from GitHub and LeetCode. A portion of the data was used for pre-training, while another portion was used for fine-tuning with instructions.
For the crawled datasets mentioned above, we employed a heuristic approach to filter out harmful content. Additionally, we removed duplicate data.
<h3 id="3-2">3.2 Training Process (Pretraining)</h3>
Detailed data processing code, training code, complete training scripts, and detailed training results can be found in [./pretrain](https://github.com/zjunlp/KnowLM/blob/main/pretrain).
Before training, we need to tokenize the data. We set the maximum length of a single sample to `1024`, but most documents are much longer than this, so they need to be partitioned. **We designed a greedy algorithm to split the documents, with the goal of ensuring that each sample consists of complete sentences while minimizing the number of segments and maximizing the length of each sample.** Additionally, because of the diversity of data sources, we developed a comprehensive data preprocessing tool that can process and merge data from various sources. Finally, given the large amount of data, loading it all into memory would put excessive pressure on the hardware. Therefore, following [DeepSpeed-Megatron](https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/tools), we use the `mmap` method to process and load the data: the indices are loaded into memory, and the corresponding data is read from disk when needed.
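As an illustration of the greedy splitting idea described above (a simplified sketch, not the actual preprocessing code): sentences are packed in order, and a new sample is started only when adding the next sentence would exceed the maximum length, so each sample stays sentence-aligned and as long as possible.
```python
# Simplified sketch of the greedy document-splitting strategy described above.
def greedy_split(sentences, sentence_lengths, max_len=1024):
    samples, current, current_len = [], [], 0
    for sentence, n_tokens in zip(sentences, sentence_lengths):
        # Start a new sample only when the next sentence would overflow max_len.
        if current and current_len + n_tokens > max_len:
            samples.append(current)
            current, current_len = [], 0
        current.append(sentence)
        current_len += n_tokens
    if current:
        samples.append(current)
    return samples
```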
Finally, we performed pre-training on 5.5 million Chinese samples, 1.5 million English samples, and 0.9 million code samples. We utilized the transformers `Trainer` in conjunction with DeepSpeed ZeRO3 (ZeRO2 was observed to be slower in a multi-node, multi-GPU setup). The training was conducted across 3 nodes, each equipped with 8 32GB V100 GPUs. The table below shows our training speeds:
| Parameter | Values |
| ------------------------------------------------- | -------------- |
| micro batch size | 20 |
| gradient accumulation | 3 |
| global batch size | 20\*3\*24=1440 |
| Time per step                                      | 260s           |
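For reference, a `Trainer` + DeepSpeed ZeRO-3 setup of this kind typically looks like the sketch below; the configuration values are illustrative placeholders rather than the exact settings used for ZhiXi.
```python
# Illustrative Trainer + DeepSpeed ZeRO-3 setup; values are placeholders, not the real config.
from transformers import TrainingArguments, Trainer

ds_config = {
    "zero_optimization": {"stage": 3},        # ZeRO-3 shards parameters, gradients and optimizer states
    "fp16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(
    output_dir="./output",
    per_device_train_batch_size=20,   # micro batch size, as in the table above
    gradient_accumulation_steps=3,
    deepspeed=ds_config,              # a path to a JSON config file also works
)

# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()   # launched with deepspeed/torchrun across 3 nodes x 8 V100s
```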
<h3 id="3-3">3.3 Dataset Construction (Instruction tuning)</h3>
In addition to incorporating general capabilities such as reasoning and coding, we have also introduced additional information extraction abilities, including NER (Named Entity Recognition), RE (Relation Extraction), and EE (Event Extraction), into the current homogeneous models. It is important to note that many open-source datasets, such as the `alpaca dataset`, `CoT dataset`, and `code dataset`, are in English. To obtain the corresponding Chinese datasets, we used `GPT-4` for translation. Two approaches were used: 1) directly translating questions and answers into Chinese, and 2) feeding English questions to `GPT-4` and generating Chinese responses. The second approach was employed for the general datasets, while the first was used for datasets like the `CoT dataset` and `code dataset`. These datasets are readily available online.
For the information extraction datasets, we constructed English instructions from open-source datasets such as `CoNLL`, `ACE`, `CASIS`, and others to generate the required training format. For the Chinese part, we used open-source datasets such as `DualEE` and `PEOPLE DAILY` for the NER and EE tasks and created corresponding Chinese instructions to synthesize the required training format. For the RE task, we built a dataset called [KG2Instruction](https://arxiv.org/abs/2305.11527). Specifically, we used Chinese Wikipedia data and BERT for Chinese entity recognition and aligned the recognized entities with the Wikipedia index. Because of potential ambiguity (a Chinese entity may have multiple indexes; for example, `apple` can refer to both a fruit and a company), we devised a strategy to disambiguate the entities. We then used a distantly supervised method to generate candidate triplets and applied predefined rules to filter out illegal or incorrect ones. Finally, with the help of crowdsourcing, we refined the obtained triplets and constructed corresponding Chinese instructions to generate the required training format.
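A simplified sketch of the distant-supervision step (just the core idea; the real KG2Instruction pipeline additionally handles entity disambiguation, rule-based filtering, and crowdsourced refinement):
```python
# Core idea of distant supervision: if two entities in a sentence are linked in the KB,
# emit the KB relation as a candidate triplet for that sentence.
def distant_triplets(sentence_entity_ids, kb_relations):
    """sentence_entity_ids: KB ids of entities recognized in one sentence.
    kb_relations: dict mapping (head_id, tail_id) -> relation name."""
    triplets = []
    for head in sentence_entity_ids:
        for tail in sentence_entity_ids:
            if head != tail and (head, tail) in kb_relations:
                triplets.append((head, kb_relations[(head, tail)], tail))
    return triplets
```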
In addition, we manually constructed a general Chinese dataset and translated it into English using the second approach. Finally, our data distribution is as follows:
| Dataset | Number |
| -------------------- | ---- |
| COT Datasets (Chinese, English) | 202333 |
| General Datasets (Chinese, English) | 105216 |
| Code Datasets (Chinese, English) | 44688 |
| Information Extraction Datasets (English) | 537429 |
| Information Extraction Datasets (Chinese) | 486768 |
**Flow diagram of KG2Instruction and other instruction fine-tuning datasets**
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/kg2instructions-en.png?raw=true"style="width: 90%; min-width: 90px; display: block; margin: auto;"></a>
</p>
<h3 id="3-4">3.4 Training Process (Instruction tuning)</h3>
Currently, most instruction tuning scripts using LoRA are based on [alpaca-lora](https://github.com/tloen/alpaca-lora/), so we will not go into detail here. Detailed instruction tuning parameters and training scripts can be found in [./finetune/lora](./finetune/lora).
<h2 id="4">4. Limitations</h2>
Due to time constraints, hardware limitations, and technical reasons, our model has limitations, including but not limited to:
- Our instruction tuning process does not involve full tuning. Instead, we use the LoRA approach for instruction tuning.
- Our model does not currently support multi-turn conversations.
- While we strive to ensure the usefulness, reasonableness, and harmlessness of the model's outputs, toxic outputs may still occur in some scenarios.
- The pretraining is not exhaustive. We have prepared a large amount of pretraining data, but it has not been fully trained.
- ······
<h2 id="5">5. TODO List</h2>
- A fully tuned instruction model (instead of the LoRA version) is being trained and will be released soon.
- New instruction tuning weights using LoRA will be updated shortly.
- ......
<h2 id="6">6. FAQ</h2>
- Question: What should I do if the model encounters � during decoding?
Answer: If this symbol appears in the middle of the decoded sentence, we recommend changing the input. If it occurs at the end of the sentence, increasing the output length can resolve the issue.
- Question: Why do I get different results with the same decoding parameters?
Answer: You may have enabled `do_sample=True`; the order of execution can also play a role. To verify, generate several times in a loop with the same decoding parameters and you will observe that each output differs.
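A quick way to check this is the minimal sketch below; `./zhixi` is a placeholder for the merged weights.
```python
# Minimal determinism check; "./zhixi" is a placeholder for the merged ZhiXi weights.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, set_seed

tokenizer = LlamaTokenizer.from_pretrained("./zhixi")
model = LlamaForCausalLM.from_pretrained("./zhixi", torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("你好,", return_tensors="pt").to(model.device)

set_seed(42)
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)   # greedy decoding: always identical
print(tokenizer.decode(out[0], skip_special_tokens=True))
# With do_sample=True, call set_seed(42) before every generate() to reproduce the same sample.
```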
- Question: Why is the extraction or answer quality not good?
Answer: Please try changing the decoding parameters.
<h2 id="7">7. Others</h2>
<h3 id="7-1">7.1 Contributors(in random order)</h3>
Pretraining: Xiang Chen, Jintian Zhang, Xiaozhuan Liang
Pretraining Data: Zhen Bi, Honghao Gui, Jing Chen, Runnan Fang
Instruction data and Instruction tuning: Xiaohan Wang, Shengyu Mao
Tool learning and Multimodal: Shuofei Qiao, Yixin Ou, Lei Li
Model Editing and Safety: Yunzhi Yao, Peng Wang, Siyuan Cheng, Bozhong Tian, Mengru Wang, Zhoubo Li
Model Testing and Deployment: Yinuo Jiang, Yuqi Zhu, Hongbin Ye, Zekun Xi
<h3 id="7-2">7.2 Citation</h3>
If you use our repository, please cite the following related papers:
```bibtex
@article{cama,
author = {Jintian Zhang, Xiaohan Wang, Honghao Gui, Xiang Chen, Yinuo Jiang, Zhen Bi, Jing Chen, Shengyu Mao, Shuofei Qiao, Xiaozhuan Liang, Yixin Ou, Ruinan Fang, Zekun Xi, Shumin Deng, Huajun Chen, Ningyu Zhang},
title = {DeepKE-LLM: A Large Language Model Based Knowledge Extraction Toolkit},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/}},
}
```
<h3 id="7-3">7.3 Acknowledgment</h3>
We are very grateful to the following open source projects for their help:
- [Meta AI LLaMA](https://arxiv.org/abs/2302.13971v1)
- [Huggingface Transformers Llama](https://github.com/huggingface/transformers/tree/main/src/transformers/models/llama)
- [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) and [Alpaca-LoRA](https://github.com/tloen/alpaca-lora)
- [Vicuna](https://vicuna.lmsys.org/)
- [Llama-X](https://github.com/AetherCortex/Llama-X)
|
myahan007/bert-base-spanish-wwm-cased-finetuned-tweets
|
myahan007
| 2023-06-26T07:14:00Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-26T06:22:52Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-spanish-wwm-cased-finetuned-tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased-finetuned-tweets
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5906 | 1.0 | 79 | 3.2810 |
| 3.2611 | 2.0 | 158 | 3.1041 |
| 3.1205 | 3.0 | 237 | 3.0158 |
| 3.0841 | 4.0 | 316 | 2.9630 |
| 3.0003 | 5.0 | 395 | 2.9326 |
| 2.9674 | 6.0 | 474 | 2.9284 |
| 2.9158 | 7.0 | 553 | 2.8591 |
| 2.8713 | 8.0 | 632 | 2.8759 |
| 2.8498 | 9.0 | 711 | 2.7889 |
| 2.8234 | 10.0 | 790 | 2.8310 |
| 2.8148 | 11.0 | 869 | 2.7908 |
| 2.79 | 12.0 | 948 | 2.8175 |
| 2.7951 | 13.0 | 1027 | 2.8349 |
| 2.7686 | 14.0 | 1106 | 2.7745 |
| 2.7525 | 15.0 | 1185 | 2.8086 |
| 2.7295 | 16.0 | 1264 | 2.8231 |
| 2.7281 | 17.0 | 1343 | 2.7800 |
| 2.7247 | 18.0 | 1422 | 2.7439 |
| 2.7426 | 19.0 | 1501 | 2.7913 |
| 2.699 | 20.0 | 1580 | 2.7857 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
LarryAIDraw/ShinononoHoukiV13
|
LarryAIDraw
| 2023-06-26T07:03:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T06:52:08Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/85750/houki-shinonono-infinite-stratos
|
LarryAIDraw/CHAR-Vira
|
LarryAIDraw
| 2023-06-26T07:03:40Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T06:51:41Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/96637/vira-7-outfits-or-granblue-fantasy
|
LarryAIDraw/bluereflection_shirai-11
|
LarryAIDraw
| 2023-06-26T07:03:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T06:49:47Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/67268?modelVersionId=71905
|
LarryAIDraw/Yukong
|
LarryAIDraw
| 2023-06-26T07:00:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T06:46:10Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/81613/yukong-honkai-star-rail
|
al123/my_qa_model
|
al123
| 2023-06-26T06:59:58Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-26T06:48:24Z |
---
tags:
- generated_from_trainer
model-index:
- name: my_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_qa_model
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 4.4635 |
| 4.4252 | 2.0 | 500 | 4.4132 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lyneshiacorrea/MyModel
|
lyneshiacorrea
| 2023-06-26T06:29:14Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-23T08:38:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MyModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MyModel
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 40 | 2.0115 | 23.0573 | 9.9112 | 18.4836 | 20.8643 | 19.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sleepotimer/SweetParfait
|
sleepotimer
| 2023-06-26T06:25:35Z | 0 | 18 | null |
[
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-22T15:43:02Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# SweetParfait
CuteYukiKawaShow + MoreParfait + 3A + 2A
## Examples
<img src="https://huggingface.co/sleepotimer/SweetParfait/resolve/main/example-1.png" width="768px">
<img src="https://huggingface.co/sleepotimer/SweetParfait/resolve/main/example-2.png" width="768px">
<img src="https://huggingface.co/sleepotimer/SweetParfait/resolve/main/example-3.png" width="768px">
<img src="https://huggingface.co/sleepotimer/SweetParfait/resolve/main/example-4.png" width="768px">
|
vineet1409/fine-tuned-bioclinical-BERT
|
vineet1409
| 2023-06-26T06:19:10Z | 3 | 0 |
bertopic
|
[
"bertopic",
"tf",
"bert",
"text-classification",
"region:us"
] |
text-classification
| 2023-06-26T06:11:05Z |
---
library_name: bertopic
pipeline_tag: text-classification
---
|
AUST001/Claude-plus
|
AUST001
| 2023-06-26T06:09:25Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2023-06-26T06:09:25Z |
---
license: cc-by-nc-sa-4.0
---
|
nolanaatama/pstlmx
|
nolanaatama
| 2023-06-26T06:09:24Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T06:01:01Z |
---
license: creativeml-openrail-m
---
|
enip2473/testing
|
enip2473
| 2023-06-26T06:04:13Z | 0 | 0 | null |
[
"translation",
"ru",
"en",
"dataset:wmt19",
"license:apache-2.0",
"region:us"
] |
translation
| 2023-06-26T05:30:16Z |
---
language:
- ru
- en
tags:
- translation
license: apache-2.0
datasets:
- wmt19
metrics:
- bleu
- sacrebleu
---
# My first huggingface model
Hello, this is a test message.
|
jackaduma/SecRoBERTa
|
jackaduma
| 2023-06-26T05:55:27Z | 3,426 | 18 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"exbert",
"security",
"cybersecurity",
"cyber security",
"threat hunting",
"threat intelligence",
"en",
"dataset:APTnotes",
"dataset:Stucco-Data",
"dataset:CASIE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/jackaduma
tags:
- exbert
- security
- cybersecurity
- cyber security
- threat hunting
- threat intelligence
license: apache-2.0
datasets:
- APTnotes
- Stucco-Data
- CASIE
---
# SecRoBERTa
This is the pretrained model presented in [SecBERT: A Pretrained Language Model for Cyber Security Text](https://github.com/jackaduma/SecBERT/), which is a SecRoBERTa model trained on cyber security text.
The training corpus was papers taken from
* [APTnotes](https://github.com/kbandla/APTnotes)
* [Stucco-Data: Cyber security data sources](https://stucco.github.io/data/)
* [CASIE: Extracting Cybersecurity Event Information from Text](https://ebiquity.umbc.edu/_file_directory_/papers/943.pdf)
* [SemEval-2018 Task 8: Semantic Extraction from CybersecUrity REports using Natural Language Processing (SecureNLP)](https://competitions.codalab.org/competitions/17262).
SecRoBERTa has its own wordpiece vocabulary (secvocab) that's built to best match the training corpus.
We trained [SecBERT](https://huggingface.co/jackaduma/SecBERT) and [SecRoBERTa](https://huggingface.co/jackaduma/SecRoBERTa) versions.
Available models include:
* [`SecBERT`](https://huggingface.co/jackaduma/SecBERT)
* [`SecRoBERTa`](https://huggingface.co/jackaduma/SecRoBERTa)
---
## **Fill Mask**
We propose to build a language model that works on cyber security text; as a result, it can improve downstream tasks (NER, text classification, semantic understanding, Q&A) in the cyber security domain.
First, the figure below shows the Fill-Mask pipeline with [Google Bert](), [AllenAI SciBert](https://github.com/allenai/scibert), and our [SecBERT](https://github.com/jackaduma/SecBERT).
<!-- <img src="./fill-mask-result.png" width="150%" height="150%"> -->

---
The original repo can be found [here](https://github.com/jackaduma/SecBERT).
|
jackaduma/SecBERT
|
jackaduma
| 2023-06-26T05:54:48Z | 5,356 | 44 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"exbert",
"security",
"cybersecurity",
"cyber security",
"threat hunting",
"threat intelligence",
"en",
"dataset:APTnotes",
"dataset:Stucco-Data",
"dataset:CASIE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/jackaduma
tags:
- exbert
- security
- cybersecurity
- cyber security
- threat hunting
- threat intelligence
license: apache-2.0
datasets:
- APTnotes
- Stucco-Data
- CASIE
---
# SecBERT
This is the pretrained model presented in [SecBERT: A Pretrained Language Model for Cyber Security Text](https://github.com/jackaduma/SecBERT/), which is a BERT model trained on cyber security text.
The training corpus was papers taken from
* [APTnotes](https://github.com/kbandla/APTnotes)
* [Stucco-Data: Cyber security data sources](https://stucco.github.io/data/)
* [CASIE: Extracting Cybersecurity Event Information from Text](https://ebiquity.umbc.edu/_file_directory_/papers/943.pdf)
* [SemEval-2018 Task 8: Semantic Extraction from CybersecUrity REports using Natural Language Processing (SecureNLP)](https://competitions.codalab.org/competitions/17262).
SecBERT has its own wordpiece vocabulary (secvocab) that's built to best match the training corpus.
We trained [SecBERT](https://huggingface.co/jackaduma/SecBERT) and [SecRoBERTa](https://huggingface.co/jackaduma/SecRoBERTa) versions.
Available models include:
* [`SecBERT`](https://huggingface.co/jackaduma/SecBERT)
* [`SecRoBERTa`](https://huggingface.co/jackaduma/SecRoBERTa)
---
## **Fill Mask**
We propose to build a language model that works on cyber security text; as a result, it can improve downstream tasks (NER, text classification, semantic understanding, Q&A) in the cyber security domain.
First, the figure below shows the Fill-Mask pipeline with [Google Bert](), [AllenAI SciBert](https://github.com/allenai/scibert), and our [SecBERT](https://github.com/jackaduma/SecBERT).
<!-- <img src="./fill-mask-result.png" width="150%" height="150%"> -->

---
The original repo can be found [here](https://github.com/jackaduma/SecBERT).
|
ChiTu/Diffusion-SVC
|
ChiTu
| 2023-06-26T05:54:04Z | 0 | 3 | null |
[
"region:us"
] | null | 2023-05-25T12:28:57Z |
Pretrained model for Diffusion-SVC.
For more information, see https://github.com/CNChTu/Diffusion-SVC
|
justinhoang/Pyramids
|
justinhoang
| 2023-06-26T05:44:45Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-06-26T05:44:41Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: justinhoang/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
RiturajB/RL_projects
|
RiturajB
| 2023-06-26T05:20:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T05:20:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.30 +/- 14.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
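A minimal loading-and-evaluation sketch (the checkpoint filename inside the repo is an assumption; check the repository's file list for the actual name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed; adjust it to the actual .zip stored in this repo.
checkpoint = load_from_hub(repo_id="RiturajB/RL_projects", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```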
|
Retrial9842/ppo-cleanrl-LunarLander-v2
|
Retrial9842
| 2023-06-26T05:18:56Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T04:26:01Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -139.40 +/- 97.19
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 200000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.9999
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'Retrial9842/ppo-cleanrl-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
csukuangfj/sherpa-onnx-zipformer-small-en-2023-06-26
|
csukuangfj
| 2023-06-26T05:17:08Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-06-26T04:59:34Z |
---
license: apache-2.0
---
The torchscript model is from
https://huggingface.co/Zengwei/icefall-asr-librispeech-zipformer-small-2023-05-16
The training code is from
https://github.com/k2-fsa/icefall/pull/1058
|
csukuangfj/sherpa-onnx-zipformer-large-en-2023-06-26
|
csukuangfj
| 2023-06-26T05:16:41Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-06-26T05:11:54Z |
---
license: apache-2.0
---
The torchscript model is from
https://huggingface.co/Zengwei/icefall-asr-librispeech-zipformer-large-2023-05-16
The training code is from
https://github.com/k2-fsa/icefall/pull/1058
|
Suchinthana/Amenity-Hashtag-Classifier
|
Suchinthana
| 2023-06-26T05:11:58Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-15T16:46:07Z |
---
license: apache-2.0
widget:
- text: '#WetTogether'
- text: '#OutWithFamily'
- text: '#PartyOnWaves'
---
|
SHENMU007/neunit_BASE_V10.5
|
SHENMU007
| 2023-06-26T05:09:00Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-06-26T02:09:54Z |
---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
justinhoang/ppo-SnowballTarget
|
justinhoang
| 2023-06-26T05:02:46Z | 16 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-06-26T05:02:44Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: justinhoang/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Rajashekhar03/autotrain-classificationofdata-69781137932
|
Rajashekhar03
| 2023-06-26T04:54:26Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"deberta",
"text-classification",
"autotrain",
"en",
"dataset:Rajashekhar03/autotrain-data-classificationofdata",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T04:53:28Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- Rajashekhar03/autotrain-data-classificationofdata
co2_eq_emissions:
emissions: 0.10003427293923188
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 69781137932
- CO2 Emissions (in grams): 0.1000
## Validation Metrics
- Loss: 0.748
- Accuracy: 0.667
- Macro F1: 0.556
- Micro F1: 0.667
- Weighted F1: 0.556
- Macro Precision: 0.500
- Micro Precision: 0.667
- Weighted Precision: 0.500
- Macro Recall: 0.667
- Micro Recall: 0.667
- Weighted Recall: 0.667
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Rajashekhar03/autotrain-classificationofdata-69781137932
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Rajashekhar03/autotrain-classificationofdata-69781137932", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Rajashekhar03/autotrain-classificationofdata-69781137932", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
draziert/Reinforce-cartpole-v1
|
draziert
| 2023-06-26T04:54:09Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T04:53:59Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JaminOne/output
|
JaminOne
| 2023-06-26T04:48:44Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T04:17:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: output
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4900
- Accuracy: 0.575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
joohwan/kksgb3
|
joohwan
| 2023-06-26T04:44:00Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T04:17:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: kksgb3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kksgb3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9959
- Accuracy: 0.8715
- F1: 0.8694
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4946 | 1.0 | 450 | 0.7178 | 0.8030 | 0.8006 |
| 0.1395 | 2.0 | 900 | 0.7651 | 0.8392 | 0.8356 |
| 0.0687 | 3.0 | 1350 | 0.7080 | 0.8671 | 0.8647 |
| 0.0415 | 4.0 | 1800 | 1.0026 | 0.8390 | 0.8364 |
| 0.028 | 5.0 | 2250 | 0.8936 | 0.8552 | 0.8531 |
| 0.0195 | 6.0 | 2700 | 0.9869 | 0.8624 | 0.8601 |
| 0.0151 | 7.0 | 3150 | 0.9351 | 0.8652 | 0.8629 |
| 0.0122 | 8.0 | 3600 | 1.1198 | 0.8564 | 0.8536 |
| 0.0106 | 9.0 | 4050 | 0.9742 | 0.8751 | 0.8729 |
| 0.0095 | 10.0 | 4500 | 0.9959 | 0.8715 | 0.8694 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
djifg/grow_classification_xlmr
|
djifg
| 2023-06-26T04:42:53Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T04:19:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: grow_classification_xlmr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# grow_classification_xlmr
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5685
- Accuracy: 0.9331
- F1: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
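For reference, a rough sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (the output directory is a placeholder; dataset and `Trainer` wiring are omitted, and the Adam betas/epsilon are the library defaults):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="grow_classification_xlmr",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```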
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2912 | 1.0 | 450 | 0.4021 | 0.9166 | 0.9159 |
| 0.0609 | 2.0 | 900 | 0.5478 | 0.9163 | 0.9155 |
| 0.0304 | 3.0 | 1350 | 0.5494 | 0.9273 | 0.9266 |
| 0.0154 | 4.0 | 1800 | 0.5599 | 0.9309 | 0.9301 |
| 0.0092 | 5.0 | 2250 | 0.5685 | 0.9331 | 0.9323 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
L4rralde/q-FrozenLake-v1-4x4-noSlippery
|
L4rralde
| 2023-06-26T04:34:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T04:15:38Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="L4rralde/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
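A possible greedy rollout with the downloaded Q-table (a sketch only: the `qtable` key and the gymnasium-style step API follow the Deep RL Course convention and are assumptions here):
```python
import numpy as np
import gymnasium as gym  # assumption: gymnasium-style API; adjust if you use classic gym

env = gym.make(model["env_id"], is_slippery=False)
state, info = env.reset()
terminated = truncated = False
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the stored Q-table
    state, reward, terminated, truncated, info = env.step(action)
env.close()
```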
|
wilmerhenao/olinguito
|
wilmerhenao
| 2023-06-26T04:22:33Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-25T22:25:43Z |
This is a fine-tune of GPT-J-6B using LoRA - https://huggingface.co/EleutherAI/gpt-j-6B
The dataset is the cleaned version of the Alpaca dataset - https://github.com/gururise/AlpacaDataCleaned
Similar models have been discussed elsewhere.
The performance is good, but not as good as the original Alpaca trained from a LLaMA base model.
This is mostly because the LLaMA 7B model was pretrained on 1T tokens, while GPT-J-6B was trained on roughly 400B tokens.
You will need a 3090 or A100 to run it; unfortunately, this current version won't work on a T4.
---
library_name: peft
license: apache-2.0
language:
- en
tags:
- Text Generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
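A minimal loading sketch, assuming the LoRA adapter in this repo follows the standard PEFT layout and the GPT-J-6B base is loaded in 8-bit as in the config above (the prompt text is illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load GPT-J-6B in 8-bit (matches load_in_8bit above), then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
base = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "wilmerhenao/olinguito")

prompt = "### Instruction:\nExplain what LoRA is in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```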
|
NasimB/gpt2-3-og-concat-modified-aochild
|
NasimB
| 2023-06-26T04:20:47Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-25T23:51:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-3-og-concat-modified-aochild
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-3-og-concat-modified-aochild
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9275
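A minimal generation sketch with the `transformers` pipeline (the prompt is illustrative):
```python
from transformers import pipeline

# Load this GPT-2 fine-tune and continue a short prompt.
generator = pipeline("text-generation", model="NasimB/gpt2-3-og-concat-modified-aochild")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```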
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 5.9917 | 0.24 | 500 | 5.0580 |
| 4.751 | 0.48 | 1000 | 4.6761 |
| 4.4491 | 0.72 | 1500 | 4.4474 |
| 4.2552 | 0.96 | 2000 | 4.3018 |
| 4.0564 | 1.21 | 2500 | 4.2130 |
| 3.9694 | 1.45 | 3000 | 4.1354 |
| 3.9064 | 1.69 | 3500 | 4.0597 |
| 3.8419 | 1.93 | 4000 | 3.9915 |
| 3.6722 | 2.17 | 4500 | 3.9682 |
| 3.6318 | 2.41 | 5000 | 3.9315 |
| 3.6106 | 2.65 | 5500 | 3.8886 |
| 3.5928 | 2.89 | 6000 | 3.8514 |
| 3.4548 | 3.13 | 6500 | 3.8612 |
| 3.3861 | 3.38 | 7000 | 3.8411 |
| 3.393 | 3.62 | 7500 | 3.8154 |
| 3.3954 | 3.86 | 8000 | 3.7894 |
| 3.2757 | 4.1 | 8500 | 3.8165 |
| 3.1711 | 4.34 | 9000 | 3.8133 |
| 3.196 | 4.58 | 9500 | 3.7968 |
| 3.1968 | 4.82 | 10000 | 3.7750 |
| 3.1316 | 5.06 | 10500 | 3.8042 |
| 2.9476 | 5.3 | 11000 | 3.8150 |
| 2.9825 | 5.54 | 11500 | 3.8057 |
| 2.9945 | 5.79 | 12000 | 3.7922 |
| 2.9682 | 6.03 | 12500 | 3.8095 |
| 2.7376 | 6.27 | 13000 | 3.8392 |
| 2.7689 | 6.51 | 13500 | 3.8374 |
| 2.78 | 6.75 | 14000 | 3.8313 |
| 2.7801 | 6.99 | 14500 | 3.8215 |
| 2.5564 | 7.23 | 15000 | 3.8731 |
| 2.5648 | 7.47 | 15500 | 3.8790 |
| 2.5779 | 7.71 | 16000 | 3.8779 |
| 2.5815 | 7.96 | 16500 | 3.8749 |
| 2.4329 | 8.2 | 17000 | 3.9075 |
| 2.4187 | 8.44 | 17500 | 3.9123 |
| 2.4313 | 8.68 | 18000 | 3.9145 |
| 2.4232 | 8.92 | 18500 | 3.9151 |
| 2.3723 | 9.16 | 19000 | 3.9246 |
| 2.3473 | 9.4 | 19500 | 3.9267 |
| 2.3464 | 9.64 | 20000 | 3.9275 |
| 2.3445 | 9.88 | 20500 | 3.9275 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
EricChang/openai-whisper-large-v2-Lora
|
EricChang
| 2023-06-26T04:09:00Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"license:other",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-23T10:49:01Z |
---
license: other
pipeline_tag: automatic-speech-recognition
library_name: transformers
---
This is a model for Taiwanese audio recognition using LoRA on Whisper-large-v2.
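A possible inference sketch, assuming the LoRA weights have already been merged into the Whisper checkpoint stored in this repository (the audio path is a placeholder):
```python
from transformers import pipeline

# Transcribe a local audio file with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="EricChang/openai-whisper-large-v2-Lora")
result = asr("sample.wav")  # placeholder path to a local audio file
print(result["text"])
```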
|
arthoho66/medicine_fine_tune
|
arthoho66
| 2023-06-26T03:55:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T03:15:16Z |
---
pipeline_tag: automatic-speech-recognition
---
|
JTeam/MissionControl
|
JTeam
| 2023-06-26T03:26:58Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-06-26T03:12:34Z |
---
license: openrail
---
Mission Control voice model for usage with RVC V2, trained to 100 epochs
|
ardhies/BeautyNwsjMajic2
|
ardhies
| 2023-06-26T03:23:48Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-20T04:31:49Z |
---
license: creativeml-openrail-m
---
Reupload for Stable Diffusion (Colab).
Source: https://civitai.com/models/53601/nwsjrealistic
|
roshan77/Reinforce-CartPole-v1
|
roshan77
| 2023-06-26T03:15:07Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T03:04:39Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
stevengrove/gpt4tools-vicuna-13b-lora
|
stevengrove
| 2023-06-26T03:10:16Z | 0 | 30 | null |
[
"license:mit",
"region:us"
] | null | 2023-04-23T08:53:56Z |
---
license: mit
---
# GPT4Tools: Teaching LLM to Use Tools via Self-instruction
[Lin Song](http://linsong.info/), [Yanwei Li](https://yanwei-li.com/), [Rui Yang](https://github.com/Yangr116), Sijie Zhao, [Yixiao Ge](https://geyixiao.com/), [Ying Shan](https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en)
GPT4Tools is a centralized system that can control multiple visual foundation models.
It is based on Vicuna (LLaMA) and 71K self-built instruction data.
By analyzing the language content, GPT4Tools is capable of automatically deciding, controlling, and utilizing different visual foundation models, allowing the user to interact with images during a conversation.
With this approach, GPT4Tools provides a seamless and efficient solution to fulfill various image-related requirements in a conversation.
Unlike previous work, we enable users to teach their own LLM to use tools through simple refinement via self-instruction and LoRA.
<a href='https://github.com/StevenGrove/GPT4Tools'><img src='https://img.shields.io/badge/Project-Page-Green'></a> <a href='https://huggingface.co/stevengrove/gpt4tools-vicuna-13b-lora'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a> [](https://youtu.be/Qrj94ibQIT8) [](https://arxiv.org/abs//2305.18752)
|
cambridgeltl/mirror-roberta-base-sentence-drophead
|
cambridgeltl
| 2023-06-26T03:09:08Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:2104.08027",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- sentence-embeddings
- sentence-similarity
---
### cambridgeltl/mirror-roberta-base-sentence-drophead
An unsupervised sentence encoder proposed by [Liu et al. (2021)](https://arxiv.org/pdf/2104.08027.pdf), using [drophead](https://aclanthology.org/2020.findings-emnlp.178.pdf) instead of dropout as feature space augmentation. The model is trained with unlabelled raw sentences, using [roberta-base](https://huggingface.co/roberta-base) as the base model. Please use `[CLS]` (before pooler) as the representation of the input.
Note the model does not replicate the exact numbers in the paper since the reported numbers in the paper are average of three runs.
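A small usage sketch that follows the `[CLS]`-before-pooler instruction above (the sentences are illustrative):
```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "cambridgeltl/mirror-roberta-base-sentence-drophead"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

sentences = ["A cat sits on the mat.", "A dog lies on the rug."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# [CLS] token (before the pooler) as the sentence representation.
embeddings = outputs.last_hidden_state[:, 0]
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```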
### Citation
```bibtex
@inproceedings{
liu2021fast,
title={Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders},
author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
booktitle={EMNLP 2021},
year={2021}
}
```
|
casque/vaeFtMse840000Ema_v100
|
casque
| 2023-06-26T03:08:57Z | 0 | 4 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-26T03:06:17Z |
---
license: creativeml-openrail-m
---
|
GaussianTech/llama-7b-sft
|
GaussianTech
| 2023-06-26T03:05:31Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-06-26T02:16:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
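For reference, a sketch of the same quantization settings expressed as a `transformers.BitsAndBytesConfig` (the base model id is not stated in this card, so loading it is left out):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the 4-bit NF4 double-quantization config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```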
### Framework versions
- PEFT 0.4.0.dev0
|
bghira/pseudo-journey-v2
|
bghira
| 2023-06-26T03:03:57Z | 47 | 12 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-22T01:32:36Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- text-to-image
---
# Capabilities
This model is "adventure" and "fantasy" focused.
With certain inference configurations, it is capable of producing very high quality results.
This model functions better without negative prompts than most fine-tunes.
# Inference parameters
Diffusers should "Just Work" with the config in this repository.
For A1111 users: DDIM scheduler, 15-50 steps.
Generally acceptable resolutions:
- 768x768
- 1024x1024
- 1152x768
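A minimal diffusers sketch using the DDIM scheduler and a 768x768 resolution from the notes above (the prompt, step count, and guidance scale are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline, DDIMScheduler

# Load this checkpoint and switch to the DDIM scheduler recommended above.
pipe = StableDiffusionPipeline.from_pretrained(
    "bghira/pseudo-journey-v2", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "a lone adventurer overlooking a glowing fantasy city at dusk",
    num_inference_steps=30,   # DDIM, 15-50 steps
    guidance_scale=6.0,       # keep CFG moderate to avoid "burnt" outputs
    width=768,
    height=768,
).images[0]
image.save("pseudo-journey-v2.png")
```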
# Limitations
- The text encoder is heavily tuned and has lost many original Stable Diffusion 2.1 concepts.
- This model is even less reliable at producing real people than the base 2.1-v model.
- Training data included only 768x768 downsampled 1:1 images; all other aspect ratios were discarded, so the model struggles with high-resolution native generations.
- Outputs may look "burnt" at higher CFG values.
# Checkpoints
This model contains multiple revisions:
`02b28ff` (latest/main checkpoint)
30000 steps (approx 4 epochs) with terminal SNR on 22k Midjourney 5.1 images plus 7200 real photographs as balance data with complete BLIP captions on all data. BS=4, LR=4e-7 to 1e-8
`6d3949c` (retrained from ptx0/pseudo-journey)
[retrained: based on ptx0/pseudo-journey @ 4000 steps from stable-diffusion-2-1 baseline on 3300 images] + 9500 steps on 22,400 images, polynomial learning rate scheduler, batch size 4, 64 gradient accumulations, FROZEN text encoder, 8bit ADAM, ZERO PLW (no regularization data), followed by 550 steps with unfrozen text encoder and constant LR 1e-8
`9135a79` (original ckpt test)
13000 steps: trained from ptx0/pseudo-journey, polynomial learning rate scheduler, batch size 3, text encoder, 8bit ADAM, ZERO PLW (no regularization data)
|
MQ-playground/ppo-LunarLander-v2
|
MQ-playground
| 2023-06-26T02:50:53Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-25T08:52:18Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.97 +/- 22.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
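One way to fill in the template above (the checkpoint filename is an assumption and should be checked against the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(
    repo_id="MQ-playground/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```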
|
bghira/pseudo-journey
|
bghira
| 2023-06-26T02:40:52Z | 10 | 2 |
diffusers
|
[
"diffusers",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-21T07:41:18Z |
# Deprecation notice
This model was a research project focused on the effect of fine-tuning the OpenCLIP text encoder.
It has been deprecated in favour of a newer checkpoint that continued training this model.
This model remains accessible as a test comparison and possible base model for fine-tuning.
# Training data
- Base model: `stabilityai/stable-diffusion-2-1`
- Data: 3300 Midjourney 5.1 upscaled images with their captions.
# Training parameters
- Duration: 4000 steps
- LR scheduler: polynomial
- Batch size: 3
- Text encoder: thawed
- Optimizer: 8bit ADAM
- No prior loss preservation
|
andreac94/finetuning-sentiment-model-FiestaAmericanaMtyReviews
|
andreac94
| 2023-06-26T02:40:02Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T02:03:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-FiestaAmericanaMtyReviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-FiestaAmericanaMtyReviews
This model is a fine-tuned version of [edumunozsala/roberta_bne_sentiment_analysis_es](https://huggingface.co/edumunozsala/roberta_bne_sentiment_analysis_es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2770
- Accuracy: 0.9381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sigmareaver/flan-ul2-4bit-128g-gptq
|
sigmareaver
| 2023-06-26T02:29:18Z | 3 | 8 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"fr",
"ro",
"de",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-25T15:55:03Z |
---
language:
- en
- fr
- ro
- de
- multilingual
thumbnail: "url to a thumbnail used in social sharing"
license: apache-2.0
metrics:
- mmlu
---
# flan-ul2 4-bit 128-groupsize GPTQ
Quantized using qwopqwop200's GPTQ-for-Llama repo on the t5 branch.<br>
Original model can be found here: [Google/flan-ul2](https://huggingface.co/google/flan-ul2)
Quantization command:
```
PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 python t5.py ../full-models/flan-ul2 wikitext2 --nsamples 256 --wbits 4 --act-order --groupsize 128 --save ../gptq-models/flan-ul2-gptq/flan-ul2-4bit-128g-gptq.pt
```
Benchmark command:
```
python t5.py ../full-models/flan-ul2 wikitext2 --load ../gptq-models/flan-ul2-gptq/flan-ul2-4bit-128g-gptq2.pt --wbits 4 --groupsize 128 --benchmark --benchmark_mode mmlu
```
Results :
```
Average accuracy 0.289 - math
Average accuracy 0.562 - health
Average accuracy 0.416 - physics
Average accuracy 0.780 - business
Average accuracy 0.610 - biology
Average accuracy 0.446 - chemistry
Average accuracy 0.461 - computer science
Average accuracy 0.513 - economics
Average accuracy 0.538 - engineering
Average accuracy 0.455 - philosophy
Average accuracy 0.622 - other
Average accuracy 0.703 - history
Average accuracy 0.707 - geography
Average accuracy 0.718 - politics
Average accuracy 0.653 - psychology
Average accuracy 0.711 - culture
Average accuracy 0.447 - law
Average accuracy 0.416 - STEM
Average accuracy 0.501 - humanities
Average accuracy 0.643 - social sciences
Average accuracy 0.613 - other (business, health, misc.)
MMLU Average accuracy: 0.540
```
|
CJacobnriia/DavidRVC
|
CJacobnriia
| 2023-06-26T02:06:29Z | 0 | 0 | null |
[
"en",
"region:us"
] | null | 2023-06-26T01:53:02Z |
---
language:
- en
---
"Aw, seriously?" - David
Note: This isn't David's voice in BFDI; it's just the one from David.wav.

|
djifg/chanhyuk-gd
|
djifg
| 2023-06-26T02:05:40Z | 76 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-26T00:34:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: chanhyuk-gd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chanhyuk-gd
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0838
- Wer: 9.4263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.246 | 0.18 | 500 | 0.2557 | 24.6951 |
| 0.1363 | 0.36 | 1000 | 0.1898 | 18.1750 |
| 0.094 | 0.54 | 1500 | 0.1450 | 14.4255 |
| 0.0842 | 0.72 | 2000 | 0.1100 | 15.4495 |
| 0.0595 | 0.9 | 2500 | 0.0916 | 10.6008 |
| 0.0167 | 1.08 | 3000 | 0.0838 | 9.4263 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
joohwan/ggb2
|
joohwan
| 2023-06-26T02:05:39Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-26T01:30:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ggb2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ggb2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2504
- Accuracy: 0.7867
- F1: 0.7902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7074 | 1.0 | 329 | 0.9372 | 0.6930 | 0.7034 |
| 0.2637 | 2.0 | 658 | 0.7453 | 0.7716 | 0.7691 |
| 0.1483 | 3.0 | 987 | 0.9178 | 0.7637 | 0.7687 |
| 0.1022 | 4.0 | 1316 | 1.1147 | 0.7665 | 0.7742 |
| 0.0695 | 5.0 | 1645 | 1.0453 | 0.7895 | 0.7941 |
| 0.0518 | 6.0 | 1974 | 0.9508 | 0.8185 | 0.8188 |
| 0.0414 | 7.0 | 2303 | 1.1806 | 0.7784 | 0.7831 |
| 0.0324 | 8.0 | 2632 | 1.1893 | 0.7947 | 0.7950 |
| 0.0272 | 9.0 | 2961 | 1.2167 | 0.7927 | 0.7955 |
| 0.0226 | 10.0 | 3290 | 1.2504 | 0.7867 | 0.7902 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
yhna/ppo-LunarLander-v2
|
yhna
| 2023-06-26T02:03:46Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-26T02:03:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.56 +/- 19.60
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
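A possible way to complete the template above (the checkpoint filename is an assumption; verify it against the repository's files):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(
    repo_id="yhna/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```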
|
lcw99/gpt-neo-1.3B-ko
|
lcw99
| 2023-06-26T01:54:06Z | 28 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neo",
"text-generation",
"text generation",
"causal-lm",
"ko",
"dataset:oscar",
"dataset:lcw99/wikipedia-korean-20221001",
"dataset:heegyu/namuwiki-extracted",
"dataset:cc100",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-12T14:10:42Z |
---
language:
- ko
tags:
- text generation
- pytorch
- causal-lm
license: apache-2.0
datasets:
- oscar
- lcw99/wikipedia-korean-20221001
- heegyu/namuwiki-extracted
- cc100
---
# gpt-neo-1.3B Korean version
Perplexity (PPL) on the OSCAR Korean text dataset = 46.0
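A minimal generation sketch with the `transformers` pipeline (the Korean prompt is illustrative):
```python
from transformers import pipeline

# Load the Korean GPT-Neo checkpoint and continue a short prompt.
generator = pipeline("text-generation", model="lcw99/gpt-neo-1.3B-ko")
print(generator("한국의 수도는", max_new_tokens=30)[0]["generated_text"])
```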
|