modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-11 06:30:11) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 555 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-11 06:29:58) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---|
Hamid20/speecht5_tts_persian
|
Hamid20
| 2024-01-18T23:31:13Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-01-18T23:18:31Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_tts_persian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_tts_persian
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5369
## Model description
More information needed
## Intended uses & limitations
More information needed
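In the absence of usage notes from the author, here is a minimal inference sketch using the standard SpeechT5 classes from `transformers`. The HiFi-GAN vocoder and the zero speaker embedding are assumptions, not part of this repository; a real x-vector speaker embedding should give noticeably better output.
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("Hamid20/speecht5_tts_persian")
model = SpeechT5ForTextToSpeech.from_pretrained("Hamid20/speecht5_tts_persian")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")  # assumed vocoder, not part of this repo

inputs = processor(text="سلام دنیا", return_tensors="pt")  # illustrative Persian input
speaker_embeddings = torch.zeros(1, 512)  # placeholder; use a real 512-dim x-vector for better quality
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D waveform tensor at 16 kHz; save it with e.g. soundfile
```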
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.671 | 0.26 | 1000 | 0.5840 |
| 0.6024 | 0.52 | 2000 | 0.5517 |
| 0.6038 | 0.77 | 3000 | 0.5420 |
| 0.5884 | 1.03 | 4000 | 0.5369 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
riverjiang/lqtsnet
|
riverjiang
| 2024-01-18T23:28:11Z | 1 | 0 |
tf-keras
|
[
"tf-keras",
"en",
"license:gpl-3.0",
"region:us"
] | null | 2024-01-18T23:15:38Z |
---
license: gpl-3.0
language:
- en
---
# AI Long QT ECG analysis
Deep Neural Networks in Evaluation of Patients with Congenital Long QT Syndrome from the Surface 12-Lead Electrocardiogram
## Step 0: Install pip packages
Install python packages.
`python -m pip install -r requirements.txt`
## Step 1: Obtain MUSE ECGs
Should be in XML format and the beginning of the files start like this:
```xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE RestingECG SYSTEM "restecg.dtd">
```
## Step 2: Convert ECGs into CSV files
Run `python lqtnet/extract_ecg_xml.py`, which converts a folder of XML ECG files into CSV format, normalizes the voltage data, and resamples every file to 2500 samples (250 Hz over a 10-second recording).
## Step 3: Create metadata file
Create a `metadata` folder and in that create a CSV file with the following columns:
```csv
file,patient_id,ecg_id,id_site,set,lqts_type,dob,age,sex,ethnicity,date,hr,qt,qt_confirmed,qt_prolonged,qc,qc_reason
```
Descriptions for the columns:
- `file`: csv file name (without '.csv' file extension)
- `patient_id`: unique ID for patient (HiRO ID)
- `ecg_id`: unique ID for the ECG file
- `id_site`: HiRO site ID
- `set`: split, `Derivation`, `Internal validation`, or `External validation`
- `lqts_type`: either `Control`, `Type 1`, or `Type 2` based on genetic diagnosis
- `dob`: date of birth, yyyy-mm-dd
- `age`: age (in years)
- `sex`: `Female` or `Male`
- `ethnicity`: used for baseline characteristics and subsequent analysis
- `date`: date of ecg, yyyy-mm-dd
- `hr`: heart rate, for baseline characteristics and subsequent analysis
- `qt`: corrected QT interval (in milliseconds)
- `qt_confirmed`: `True` or `False`, whether the QT interval was manually interpreted
- `qc`: `True` or `False`, whether ECG passed manual quality control
- `qc_reason` (optional): description of QC issue with ECG
Use `lqtnet.import_metadata.convert_dtypes()` to convert the column dtypes for more efficient storage. We also suggest saving the metadata as a `pickle` or `parquet` file after importing it as a pandas `DataFrame`.
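A minimal sketch of this step is below; the file names and the exact `convert_dtypes()` call signature are illustrative assumptions.
```python
import pandas as pd
import lqtnet

# Read the metadata CSV created above (path is illustrative)
metadata = pd.read_csv("metadata/example_metadata.csv")

# Convert column dtypes for more compact storage (exact signature may differ)
metadata = lqtnet.import_metadata.convert_dtypes(metadata)

# Persist as parquet for faster loading in later steps
metadata.to_parquet("metadata/example_metadata.parquet")
```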
## Step 4: Quality control
Some files are missing parts of leads, contain excessive noise or wandering baselines, or are corrupted and contain no ECG data at all. Record these quality issues in the metadata file described above.
## Step 5: Run model inference
Please see example code below, showing inference for an `External validation` dataset:
```python
import lqtnet
import pandas as pd

# directory containing normalized CSV files
ECG_SOURCE_DIR = 'ecgs/csv_normalized_2500/'
MODEL_PATH = 'models/XYZ/'

metadata = pd.read_parquet('metadata/example_YYYYmmdd.parquet')
ext_df = metadata.query('set == "External validation" and qc == "Good"')

x_ext = lqtnet.import_ecgs.df_import_csv_to_numpy(ext_df, from_dir=ECG_SOURCE_DIR)
y_ext = lqtnet.import_ecgs.df_to_np_labels(ext_df)

model = lqtnet.train._load_model(MODEL_PATH)

# make predictions - save this output for further analysis
y_ext_pred = model.predict(x_ext)
```
|
llm-wizard/llama2_instruct_generation
|
llm-wizard
| 2024-01-18T23:25:14Z | 9 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2023-11-14T23:36:46Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama2_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2_instruct_generation
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6759
## Model description
More information needed
## Intended uses & limitations
More information needed
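The card does not include usage code. As a minimal sketch, the LoRA adapter can be loaded on top of the base model with `peft`; the prompt template below is only an assumption, since the exact instruction format used during fine-tuning is not documented here.
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "llm-wizard/llama2_instruct_generation", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

# Prompt format is an assumption; adjust to the template used during fine-tuning
prompt = "### Instruction:\nSummarize what a LoRA adapter does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```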
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8994 | 0.0 | 20 | 1.8109 |
| 1.8521 | 0.01 | 40 | 1.7830 |
| 1.8745 | 0.01 | 60 | 1.7694 |
| 1.8092 | 0.01 | 80 | 1.7576 |
| 1.8042 | 0.01 | 100 | 1.7436 |
| 1.9305 | 0.02 | 120 | 1.7090 |
| 1.7965 | 0.02 | 140 | 1.7034 |
| 1.8457 | 0.02 | 160 | 1.6977 |
| 1.823 | 0.02 | 180 | 1.6943 |
| 1.7997 | 0.03 | 200 | 1.6922 |
| 1.7614 | 0.03 | 220 | 1.6895 |
| 1.7701 | 0.03 | 240 | 1.6886 |
| 1.8093 | 0.04 | 260 | 1.6877 |
| 1.8101 | 0.04 | 280 | 1.6847 |
| 1.8109 | 0.04 | 300 | 1.6834 |
| 1.7523 | 0.04 | 320 | 1.6807 |
| 1.7575 | 0.05 | 340 | 1.6802 |
| 1.8497 | 0.05 | 360 | 1.6783 |
| 1.8347 | 0.05 | 380 | 1.6781 |
| 1.8019 | 0.05 | 400 | 1.6766 |
| 1.7267 | 0.06 | 420 | 1.6770 |
| 1.7849 | 0.06 | 440 | 1.6767 |
| 1.7727 | 0.06 | 460 | 1.6748 |
| 1.7796 | 0.07 | 480 | 1.6744 |
| 1.7963 | 0.07 | 500 | 1.6759 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Reboot87/distilbert-base-uncased-finetuned-discharge
|
Reboot87
| 2024-01-18T23:22:46Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-17T19:43:59Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-discharge
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-discharge
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9049
- Accuracy: 0.7425
- F1: 0.7419
## Model description
More information needed
## Intended uses & limitations
More information needed
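No usage example is given. A minimal sketch with the `transformers` text-classification pipeline is shown below; the input sentence is illustrative, and the label set depends on the (undocumented) training data.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Reboot87/distilbert-base-uncased-finetuned-discharge",
)
# Illustrative input; the returned labels depend on how the classifier was trained
print(classifier("Patient was discharged home in stable condition."))
```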
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 814 | 0.5448 | 0.7052 | 0.7020 |
| 0.5626 | 2.0 | 1628 | 0.5420 | 0.7323 | 0.7322 |
| 0.5626 | 3.0 | 2442 | 0.5337 | 0.7535 | 0.7516 |
| 0.3498 | 4.0 | 3256 | 0.6277 | 0.7491 | 0.7482 |
| 0.3498 | 5.0 | 4070 | 0.7642 | 0.7432 | 0.7421 |
| 0.2189 | 6.0 | 4884 | 0.9049 | 0.7425 | 0.7419 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
DrishtiSharma/llama-2-7b-flash-attention2-lora-patent-classification
|
DrishtiSharma
| 2024-01-18T23:15:20Z | 2 | 1 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:patent-classification",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:adapter:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-18T23:15:03Z |
---
library_name: peft
tags:
- generated_from_trainer
datasets:
- patent-classification
metrics:
- accuracy
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama-2-7b-flash-attention2-lora-patent-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-2-7b-flash-attention2-lora-patent-classification
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the patent-classification dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5598
- Accuracy: 0.436
- Precision Macro: 0.4276
- Recall Macro: 0.3658
- F1-score Macro: 0.3707
## Model description
More information needed
## Intended uses & limitations
More information needed
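This repository ships only the LoRA adapter, so the base model must be loaded first. The sketch below assumes a causal-LM-style setup; the adapter's actual base model and task type can be checked with `PeftConfig` before loading.
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "DrishtiSharma/llama-2-7b-flash-attention2-lora-patent-classification"

# Inspect the adapter to confirm its base model and task type before loading
peft_config = PeftConfig.from_pretrained(adapter_id)
print(peft_config.base_model_name_or_path, peft_config.task_type)

# Assumes a causal-LM head; swap in the matching Auto* class if the task type differs
base = AutoModelForCausalLM.from_pretrained(peft_config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(peft_config.base_model_name_or_path)
```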
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1-score Macro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------------:|:------------:|:--------------:|
| 1.4059 | 1.0 | 6250 | 1.9046 | 0.3748 | 0.3815 | 0.3173 | 0.3012 |
| 1.1153 | 2.0 | 12500 | 1.6457 | 0.419 | 0.4162 | 0.3461 | 0.3466 |
| 1.0234 | 3.0 | 18750 | 1.5598 | 0.436 | 0.4276 | 0.3658 | 0.3707 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
qfrodicio/roberta-finetuned-intention-prediction-es
|
qfrodicio
| 2024-01-18T23:12:19Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:MMG/mlm-spanish-roberta-base",
"base_model:finetune:MMG/mlm-spanish-roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-10T21:24:10Z |
---
base_model: MMG/mlm-spanish-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-finetuned-intention-prediction-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-intention-prediction-es
This model is a fine-tuned version of [MMG/mlm-spanish-roberta-base](https://huggingface.co/MMG/mlm-spanish-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9097
- Accuracy: 0.6918
- Precision: 0.6953
- Recall: 0.6918
- F1: 0.6848
## Model description
More information needed
## Intended uses & limitations
More information needed
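No usage example is provided. A minimal sketch with the token-classification pipeline follows; the Spanish sentence is illustrative, and the intention label set is not documented here.
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="qfrodicio/roberta-finetuned-intention-prediction-es",
    aggregation_strategy="simple",
)
# Illustrative Spanish input; output labels depend on the model's tag set
print(tagger("Buenos días, ¿me puede ayudar con mi pedido?"))
```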
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 2.2985 | 1.0 | 102 | 1.7435 | 0.4970 | 0.4378 | 0.4970 | 0.4215 |
| 1.3399 | 2.0 | 204 | 1.4205 | 0.5828 | 0.5872 | 0.5828 | 0.5624 |
| 0.8893 | 3.0 | 306 | 1.2699 | 0.6393 | 0.6276 | 0.6393 | 0.6192 |
| 0.5691 | 4.0 | 408 | 1.3327 | 0.6515 | 0.6604 | 0.6515 | 0.6417 |
| 0.3837 | 5.0 | 510 | 1.3836 | 0.6592 | 0.6710 | 0.6592 | 0.6528 |
| 0.2543 | 6.0 | 612 | 1.4253 | 0.6641 | 0.6703 | 0.6641 | 0.6528 |
| 0.1669 | 7.0 | 714 | 1.5317 | 0.6650 | 0.6795 | 0.6650 | 0.6546 |
| 0.1139 | 8.0 | 816 | 1.5939 | 0.6725 | 0.6754 | 0.6725 | 0.6615 |
| 0.0805 | 9.0 | 918 | 1.6987 | 0.6594 | 0.6696 | 0.6594 | 0.6518 |
| 0.0578 | 10.0 | 1020 | 1.6960 | 0.6793 | 0.6782 | 0.6793 | 0.6690 |
| 0.0374 | 11.0 | 1122 | 1.7590 | 0.6824 | 0.6877 | 0.6824 | 0.6729 |
| 0.03 | 12.0 | 1224 | 1.7425 | 0.6842 | 0.6859 | 0.6842 | 0.6785 |
| 0.0183 | 13.0 | 1326 | 1.8165 | 0.6830 | 0.6846 | 0.6830 | 0.6774 |
| 0.0152 | 14.0 | 1428 | 1.8348 | 0.6866 | 0.6927 | 0.6866 | 0.6799 |
| 0.0109 | 15.0 | 1530 | 1.8562 | 0.6940 | 0.6967 | 0.6940 | 0.6855 |
| 0.0097 | 16.0 | 1632 | 1.8766 | 0.6889 | 0.6947 | 0.6889 | 0.6833 |
| 0.0073 | 17.0 | 1734 | 1.8745 | 0.6920 | 0.6948 | 0.6920 | 0.6851 |
| 0.0062 | 18.0 | 1836 | 1.8944 | 0.6895 | 0.6919 | 0.6895 | 0.6825 |
| 0.0057 | 19.0 | 1938 | 1.9103 | 0.6936 | 0.6984 | 0.6936 | 0.6867 |
| 0.0052 | 20.0 | 2040 | 1.9097 | 0.6918 | 0.6953 | 0.6918 | 0.6848 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
nm-testing/starcoderbase-1b-pruned50-quant
|
nm-testing
| 2024-01-18T22:45:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T22:43:32Z |
```python
model_id = "mgoin/starcoderbase-1b-pruned50-quant"
# Load model with SparseAutoModel
from sparseml.transformers.utils import SparseAutoModel
from transformers import AutoConfig
config = AutoConfig.from_pretrained(model_id) # Why does SparseAutoModel need config?
model = SparseAutoModel.text_generation_from_pretrained(model_id, config=config)
# Apply recipe to model
# Note: Really annoying we can't grab the recipe.yaml present in the uploaded model
# and you need this separate apply_recipe_structure_to_model function
from sparseml.pytorch.model_load.helpers import apply_recipe_structure_to_model
from huggingface_hub import hf_hub_download
import os
recipe_path = hf_hub_download(repo_id=model_id, filename="recipe.yaml")
apply_recipe_structure_to_model(
model=model, recipe_path=recipe_path, model_path=os.path.dirname(recipe_path)
)
# Regular HF inference
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
"""
def print_hello_world():
print("Hello World!")
print_hello_world
"""
```
|
jeiku/RocketHermesZephyrBoros_3B
|
jeiku
| 2024-01-18T22:44:11Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"arxiv:2212.04089",
"base_model:cxllin/StableHermes-3b",
"base_model:merge:cxllin/StableHermes-3b",
"base_model:jeiku/Rosa_v1_3B",
"base_model:merge:jeiku/Rosa_v1_3B",
"base_model:jondurbin/airoboros-3b-3p0",
"base_model:merge:jondurbin/airoboros-3b-3p0",
"base_model:pansophic/rocket-3B",
"base_model:merge:pansophic/rocket-3B",
"base_model:stabilityai/stablelm-zephyr-3b",
"base_model:merge:stabilityai/stablelm-zephyr-3b",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-18T22:24:55Z |
---
base_model:
- pansophic/rocket-3B
- jeiku/Rosa_v1_3B
- jondurbin/airoboros-3b-3p0
- cxllin/StableHermes-3b
- stabilityai/stablelm-zephyr-3b
tags:
- mergekit
- merge
---
# RocketHermesZephyrBoros_3B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) as a base.
### Models Merged
The following models were included in the merge:
* [pansophic/rocket-3B](https://huggingface.co/pansophic/rocket-3B)
* [jondurbin/airoboros-3b-3p0](https://huggingface.co/jondurbin/airoboros-3b-3p0)
* [cxllin/StableHermes-3b](https://huggingface.co/cxllin/StableHermes-3b)
* [stabilityai/stablelm-zephyr-3b](https://huggingface.co/stabilityai/stablelm-zephyr-3b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: task_arithmetic
base_model: jeiku/Rosa_v1_3B
models:
- model: jondurbin/airoboros-3b-3p0
parameters:
weight: 0.35
- model: pansophic/rocket-3B
parameters:
weight: 0.15
- model: stabilityai/stablelm-zephyr-3b
parameters:
weight: 0.15
- model: cxllin/StableHermes-3b
parameters:
weight: 0.35
dtype: float16
```
|
pratiknichite/TrainedTextSummerizerBART
|
pratiknichite
| 2024-01-18T22:33:23Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-01-18T20:21:42Z |
---
library_name: transformers
pipeline_tag: summarization
---
# Trained Text Summerizer BART:
- This model is a fine-tuned version of the **facebook/bart-large-cnn** model.
- **TrainedTextSummerizerBART** was trained on the **SAMSum** dataset from Hugging Face for dialogue summarization (see the usage sketch below).
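A minimal usage sketch with the `transformers` summarization pipeline; the dialogue below is illustrative.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="pratiknichite/TrainedTextSummerizerBART")

# Illustrative SAMSum-style dialogue
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Tom: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```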
|
pszemraj/xtremedistil-l12-h384-uncased-zeroshot-v1.1
|
pszemraj
| 2024-01-18T22:18:21Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"zero-shot-classification",
"base_model:microsoft/xtremedistil-l12-h384-uncased",
"base_model:finetune:microsoft/xtremedistil-l12-h384-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2024-01-18T08:03:06Z |
---
license: mit
base_model: microsoft/xtremedistil-l12-h384-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xtremedistil-l12-h384-uncased-zeroshot-v1.1-none
results: []
pipeline_tag: zero-shot-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtremedistil-l12-h384-uncased-zeroshot-v1.1-none
A slightly larger sibling to https://hf.co/MoritzLaurer/xtremedistil-l6-h256-zeroshot-v1.1-all-33
## Model description
This model is a fine-tuned version of [microsoft/xtremedistil-l12-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l12-h384-uncased) on the datasets listed under Training and evaluation data below.
It achieves the following results on the evaluation set:
- Loss: 0.2063
- F1 Macro: 0.5570
- F1 Micro: 0.6385
- Accuracy Balanced: 0.6104
- Accuracy: 0.6385
- Precision Macro: 0.5705
- Recall Macro: 0.6104
- Precision Micro: 0.6385
- Recall Micro: 0.6385
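No usage snippet is included in the card. A minimal sketch with the zero-shot classification pipeline is shown below; the example text and candidate labels are illustrative.
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="pszemraj/xtremedistil-l12-h384-uncased-zeroshot-v1.1",
)
print(classifier(
    "The new phone ships with a faster camera sensor.",
    candidate_labels=["technology", "sports", "politics"],  # illustrative labels
))
```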
## Training and evaluation data
See https://github.com/MoritzLaurer/zeroshot-classifier/blob/main/datasets_overview.csv
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 80085
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.04
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Micro | Accuracy Balanced | Accuracy | Precision Macro | Recall Macro | Precision Micro | Recall Micro |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|:-----------------:|:--------:|:---------------:|:------------:|:---------------:|:------------:|
| 0.2756 | 0.32 | 5000 | 0.4155 | 0.8146 | 0.8255 | 0.8215 | 0.8255 | 0.8101 | 0.8215 | 0.8255 | 0.8255 |
| 0.2395 | 0.65 | 10000 | 0.4166 | 0.8182 | 0.8303 | 0.8222 | 0.8303 | 0.8151 | 0.8222 | 0.8303 | 0.8303 |
| 0.2464 | 0.97 | 15000 | 0.4114 | 0.8204 | 0.8325 | 0.8239 | 0.8325 | 0.8175 | 0.8239 | 0.8325 | 0.8325 |
| 0.2105 | 1.3 | 20000 | 0.4051 | 0.8236 | 0.8363 | 0.8254 | 0.8363 | 0.8219 | 0.8254 | 0.8363 | 0.8363 |
| 0.2267 | 1.62 | 25000 | 0.4030 | 0.8244 | 0.8373 | 0.8257 | 0.8373 | 0.8231 | 0.8257 | 0.8373 | 0.8373 |
| 0.2312 | 1.95 | 30000 | 0.4088 | 0.8233 | 0.836 | 0.8250 | 0.836 | 0.8217 | 0.8250 | 0.836 | 0.836 |
| 0.2241 | 2.27 | 35000 | 0.4061 | 0.8257 | 0.8375 | 0.8291 | 0.8375 | 0.8229 | 0.8291 | 0.8375 | 0.8375 |
| 0.2183 | 2.6 | 40000 | 0.4043 | 0.8259 | 0.838 | 0.8285 | 0.838 | 0.8235 | 0.8285 | 0.838 | 0.838 |
| 0.2285 | 2.92 | 45000 | 0.4041 | 0.8241 | 0.8365 | 0.8263 | 0.8365 | 0.8220 | 0.8263 | 0.8365 | 0.8365 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
RamAbhishek/finetuning-sentiment-model-3000-samples
|
RamAbhishek
| 2024-01-18T22:16:05Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-18T20:50:52Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3254
- Accuracy: 0.87
- F1: 0.8713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jeiku/Everything_v3_128_StableLM
|
jeiku
| 2024-01-18T22:09:57Z | 2 | 0 |
peft
|
[
"peft",
"stablelm_epoch",
"generated_from_trainer",
"custom_code",
"base_model:stabilityai/stablelm-3b-4e1t",
"base_model:adapter:stabilityai/stablelm-3b-4e1t",
"license:cc-by-sa-4.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-01-18T22:09:01Z |
---
license: cc-by-sa-4.0
library_name: peft
tags:
- generated_from_trainer
base_model: stabilityai/stablelm-3b-4e1t
model-index:
- name: everything
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: stabilityai/stablelm-3b-4e1t
base_model_config: stabilityai/stablelm-3b-4e1t
trust_remote_code: true
model_type: AutoModelForCausalLM
tokenizer_type: GPTNeoXTokenizerFast
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: totally-not-an-llm/EverythingLM-data-V3
type: alpaca
dataset_prepared_path:
val_set_size: 0.005
output_dir: ./everything
adapter: qlora
sequence_len: 1024
sample_packing: false
pad_to_sequence_len: true
save_safetensors: false
lora_r: 128
lora_alpha: 256
lora_dropout: 0.05
lora_target_linear: false
lora_fan_in_fan_out:
lora_modules_to_save:
- embed_tokens
- lm_head
lora_target_modules:
- q_proj
- v_proj
gradient_accumulation_steps: 1
micro_batch_size: 16
num_epochs: 5
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.00005
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 1
eval_table_size:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<|endoftext|>"
eos_token: "<|im_end|>"
unk_token: "<|endoftext|>"
tokens:
- "<|im_start|>"
- "<|im_end|>"
```
</details><br>
# everything
This model is a fine-tuned version of [stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t) on the totally-not-an-llm/EverythingLM-data-V3 dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.8096
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3873 | 0.02 | 1 | 1.6256 |
| 1.1399 | 1.0 | 60 | 0.8263 |
| 1.5717 | 2.0 | 120 | 0.8054 |
| 0.6861 | 3.0 | 180 | 0.8149 |
| 0.9527 | 4.0 | 240 | 0.8036 |
| 0.5738 | 5.0 | 300 | 0.8096 |
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jeiku/Theory_of_Mind_RP_128_StableLM
|
jeiku
| 2024-01-18T22:08:15Z | 4 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:jeiku/ToxicNoRobotsRosaHermesBoros_3B",
"base_model:adapter:jeiku/ToxicNoRobotsRosaHermesBoros_3B",
"region:us"
] | null | 2024-01-18T22:07:15Z |
---
library_name: peft
base_model: jeiku/ToxicNoRobotsRosaHermesBoros_3B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0
|
Ghunghru/Misinformation-Covid-bert-base-german-cased
|
Ghunghru
| 2024-01-18T22:04:54Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-12T13:11:03Z |
---
license: mit
base_model: bert-base-german-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Misinformation-Covid-bert-base-german-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Misinformation-Covid-bert-base-german-cased
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7182
- F1: 0.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6609 | 1.0 | 189 | 0.6062 | 0.0 |
| 0.6168 | 2.0 | 378 | 0.5649 | 0.1818 |
| 0.5638 | 3.0 | 567 | 0.5665 | 0.1818 |
| 0.5382 | 4.0 | 756 | 0.5790 | 0.2128 |
| 0.5399 | 5.0 | 945 | 0.5459 | 0.3284 |
| 0.4745 | 6.0 | 1134 | 0.7525 | 0.3265 |
| 0.5061 | 7.0 | 1323 | 0.6379 | 0.3529 |
| 0.377 | 8.0 | 1512 | 0.6965 | 0.3692 |
| 0.4159 | 9.0 | 1701 | 0.7172 | 0.3478 |
| 0.3924 | 10.0 | 1890 | 0.7182 | 0.3333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
homerquan/taxi-rl
|
homerquan
| 2024-01-18T21:56:03Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-18T21:56:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-rl
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # the classic `gym` package also works for Taxi-v3

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="homerquan/taxi-rl", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sandeepsundaram/phi2-ultrachat-qlora
|
sandeepsundaram
| 2024-01-18T21:53:43Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T21:19:04Z |
---
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
---
## Model Summary
phi2-ultrachat-qlora is a Transformer fine-tuned on the HuggingFaceH4/ultrachat_200k dataset.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
### Inference Code:
```python
import warnings
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "sandeepsundaram/phi2-ultrachat-qlora"
# Load the fine-tuned model on GPU (the phi custom code requires trust_remote_code)
model = AutoModelForCausalLM.from_pretrained(path, trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained(path)
tokenizer.eos_token_id = model.config.eos_token_id
tokenizer.pad_token = tokenizer.eos_token
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
warnings.filterwarnings('ignore')  # Ignore all warnings

#inputs = tokenizer('Question: why human are cute then human? write in the form of poem. \n Output: ', return_tensors="pt", return_attention_mask=False).to('cuda')
inputs = tokenizer('''Write code for the Fibonacci series in Python.''', return_tensors="pt", return_attention_mask=False).to('cuda')

generation_params = {
    'max_length': 512,
    'do_sample': True,
    'temperature': 0.5,
    'top_p': 0.9,
    'top_k': 50
}

outputs = model.generate(**inputs, **generation_params)
decoded_outputs = tokenizer.batch_decode(outputs)
for text in decoded_outputs:
    text = text.replace('\\n', '\n')
    print(text)
    print("\n\n")
```
|
longlonglong23/llama2-qlora-finetunined-french
|
longlonglong23
| 2024-01-18T21:40:56Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2024-01-18T21:40:50Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
ksuyash/finetuned-indian-food
|
ksuyash
| 2024-01-18T21:40:52Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-17T18:47:00Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: finetuned-indian-food
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: indian_food_images
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9980858191099059
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-indian-food
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the indian_food_images dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0144
- Accuracy: 0.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
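No usage example is given. A minimal sketch with the image-classification pipeline is shown below; the image path is illustrative.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="ksuyash/finetuned-indian-food")
# Path is illustrative; a file path, URL, or PIL image of a dish all work
print(classifier("samosa.jpg"))
```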
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9849 | 0.26 | 100 | 0.8445 | 0.8721 |
| 0.4628 | 0.51 | 200 | 0.4435 | 0.9201 |
| 0.4738 | 0.77 | 300 | 0.3339 | 0.9336 |
| 0.3603 | 1.02 | 400 | 0.2924 | 0.9328 |
| 0.1792 | 1.28 | 500 | 0.1862 | 0.9560 |
| 0.2304 | 1.53 | 600 | 0.1352 | 0.9711 |
| 0.1512 | 1.79 | 700 | 0.1244 | 0.9689 |
| 0.1805 | 2.04 | 800 | 0.0843 | 0.9805 |
| 0.1672 | 2.3 | 900 | 0.0576 | 0.9879 |
| 0.0154 | 2.55 | 1000 | 0.0498 | 0.9900 |
| 0.0357 | 2.81 | 1100 | 0.0359 | 0.9933 |
| 0.0241 | 3.06 | 1200 | 0.0290 | 0.9951 |
| 0.0133 | 3.32 | 1300 | 0.0228 | 0.9967 |
| 0.0088 | 3.57 | 1400 | 0.0193 | 0.9970 |
| 0.0511 | 3.83 | 1500 | 0.0144 | 0.9981 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
bardsai/whisper-large-v2-pl-v2
|
bardsai
| 2024-01-18T21:40:17Z | 1,391 | 4 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"pl",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-12-14T15:05:57Z |
---
language:
- pl
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Large v2 PL
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: pl
split: test
args: pl
metrics:
- type: wer
value: 7.280175959972464
name: WER
- type: wer
value: 7.31
name: WER
- type: wer_without_norm
value: 20.18
name: WER unnormalized
- type: cer
value: 2.08
name: CER
- type: mer
value: 7.27
name: MER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: facebook/voxpopuli
type: facebook/voxpopuli
config: pl
split: test
metrics:
- type: wer
value: 9.61
name: WER
- type: wer_without_norm
value: 30.33
name: WER unnormalized
- type: cer
value: 5.5
name: CER
- type: mer
value: 9.45
name: MER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: pl_pl
split: test
metrics:
- type: wer
value: 8.68
name: WER
- type: wer_without_norm
value: 29.33
name: WER unnormalized
- type: cer
value: 3.63
name: CER
- type: mer
value: 8.62
name: MER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large v2 PL
This model is a fine-tuned version of [bardsai/whisper-large-v2-pl](https://huggingface.co/bardsai/whisper-large-v2-pl) on the Common Voice 11.0 and the FLEURS datasets.
It achieves the following results on the evaluation set:
- Loss: 0.3684
- Wer: 7.2802
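A minimal transcription sketch with the `transformers` ASR pipeline; the audio path and chunking setting are illustrative.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="bardsai/whisper-large-v2-pl-v2",
    chunk_length_s=30,  # illustrative; useful for audio longer than 30 s
)
print(asr("sample_pl.wav")["text"])  # path is illustrative
```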
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0047 | 1.35 | 700 | 0.3428 | 8.5562 |
| 0.0011 | 2.7 | 1400 | 0.3605 | 7.5505 |
| 0.0003 | 4.05 | 2100 | 0.3684 | 7.2802 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
mohdadeeb/DR-ViT
|
mohdadeeb
| 2024-01-18T21:26:28Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-18T21:26:14Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_keras_callback
model-index:
- name: DR-ViT
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# DR-ViT
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7068
- Train Accuracy: 0.7214
- Train Top-3-accuracy: 0.9677
- Validation Loss: 0.6596
- Validation Accuracy: 0.7345
- Validation Top-3-accuracy: 0.9782
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.8883 | 0.6645 | 0.9255 | 0.7075 | 0.7200 | 0.9655 | 0 |
| 0.7068 | 0.7214 | 0.9677 | 0.6596 | 0.7345 | 0.9782 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
feulf/EvolCodeLlama-7b
|
feulf
| 2024-01-18T21:14:00Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-01-18T20:31:51Z |
---
license: llama2
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: EvolCodeLlama-7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.3.0`
```yaml
base_model: codellama/CodeLlama-7b-hf
base_model_config: codellama/CodeLlama-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
hub_model_id: EvolCodeLlama-7b
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: mlabonne/Evol-Instruct-Python-1k
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.02
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 2048
sample_packing: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
eval_steps: 0.01
save_strategy: epoch
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# EvolCodeLlama-7b
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the mlabonne/Evol-Instruct-Python-1k dataset (see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 0.3797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3472 | 0.01 | 1 | 0.4986 |
| 0.3139 | 0.03 | 4 | 0.4985 |
| 0.2981 | 0.07 | 8 | 0.4983 |
| 0.4311 | 0.1 | 12 | 0.4979 |
| 0.3958 | 0.14 | 16 | 0.4960 |
| 0.335 | 0.17 | 20 | 0.4915 |
| 0.4286 | 0.2 | 24 | 0.4808 |
| 0.4011 | 0.24 | 28 | 0.4629 |
| 0.3269 | 0.27 | 32 | 0.4445 |
| 0.2559 | 0.31 | 36 | 0.4284 |
| 0.3786 | 0.34 | 40 | 0.4174 |
| 0.2967 | 0.37 | 44 | 0.4107 |
| 0.2677 | 0.41 | 48 | 0.4027 |
| 0.2455 | 0.44 | 52 | 0.3959 |
| 0.3267 | 0.47 | 56 | 0.3916 |
| 0.2902 | 0.51 | 60 | 0.3882 |
| 0.1845 | 0.54 | 64 | 0.3878 |
| 0.2593 | 0.58 | 68 | 0.3869 |
| 0.3104 | 0.61 | 72 | 0.3836 |
| 0.3799 | 0.64 | 76 | 0.3819 |
| 0.2059 | 0.68 | 80 | 0.3794 |
| 0.3177 | 0.71 | 84 | 0.3792 |
| 0.2307 | 0.75 | 88 | 0.3768 |
| 0.282 | 0.78 | 92 | 0.3749 |
| 0.2713 | 0.81 | 96 | 0.3738 |
| 0.2948 | 0.85 | 100 | 0.3725 |
| 0.2311 | 0.88 | 104 | 0.3713 |
| 0.2516 | 0.92 | 108 | 0.3716 |
| 0.2462 | 0.95 | 112 | 0.3715 |
| 0.2035 | 0.98 | 116 | 0.3711 |
| 0.2638 | 1.02 | 120 | 0.3712 |
| 0.2477 | 1.05 | 124 | 0.3726 |
| 0.1986 | 1.08 | 128 | 0.3682 |
| 0.2292 | 1.12 | 132 | 0.3671 |
| 0.1549 | 1.15 | 136 | 0.3680 |
| 0.1953 | 1.19 | 140 | 0.3683 |
| 0.224 | 1.22 | 144 | 0.3671 |
| 0.1941 | 1.25 | 148 | 0.3687 |
| 0.2234 | 1.29 | 152 | 0.3709 |
| 0.2659 | 1.32 | 156 | 0.3700 |
| 0.2535 | 1.36 | 160 | 0.3689 |
| 0.2115 | 1.39 | 164 | 0.3683 |
| 0.2481 | 1.42 | 168 | 0.3693 |
| 0.2101 | 1.46 | 172 | 0.3699 |
| 0.228 | 1.49 | 176 | 0.3697 |
| 0.3159 | 1.53 | 180 | 0.3680 |
| 0.2257 | 1.56 | 184 | 0.3664 |
| 0.1684 | 1.59 | 188 | 0.3670 |
| 0.2277 | 1.63 | 192 | 0.3663 |
| 0.2787 | 1.66 | 196 | 0.3668 |
| 0.2284 | 1.69 | 200 | 0.3654 |
| 0.2789 | 1.73 | 204 | 0.3640 |
| 0.2089 | 1.76 | 208 | 0.3632 |
| 0.3387 | 1.8 | 212 | 0.3633 |
| 0.2677 | 1.83 | 216 | 0.3610 |
| 0.2684 | 1.86 | 220 | 0.3609 |
| 0.2458 | 1.9 | 224 | 0.3610 |
| 0.2808 | 1.93 | 228 | 0.3602 |
| 0.2895 | 1.97 | 232 | 0.3596 |
| 0.323 | 2.0 | 236 | 0.3591 |
| 0.2105 | 2.03 | 240 | 0.3623 |
| 0.1911 | 2.07 | 244 | 0.3720 |
| 0.2888 | 2.1 | 248 | 0.3802 |
| 0.1958 | 2.13 | 252 | 0.3748 |
| 0.1785 | 2.17 | 256 | 0.3701 |
| 0.2604 | 2.2 | 260 | 0.3709 |
| 0.2212 | 2.24 | 264 | 0.3737 |
| 0.1996 | 2.27 | 268 | 0.3772 |
| 0.1567 | 2.3 | 272 | 0.3778 |
| 0.1777 | 2.34 | 276 | 0.3778 |
| 0.2642 | 2.37 | 280 | 0.3785 |
| 0.1907 | 2.4 | 284 | 0.3796 |
| 0.1637 | 2.44 | 288 | 0.3785 |
| 0.1778 | 2.47 | 292 | 0.3785 |
| 0.144 | 2.51 | 296 | 0.3789 |
| 0.1758 | 2.54 | 300 | 0.3788 |
| 0.2018 | 2.57 | 304 | 0.3784 |
| 0.3126 | 2.61 | 308 | 0.3783 |
| 0.1623 | 2.64 | 312 | 0.3790 |
| 0.223 | 2.68 | 316 | 0.3798 |
| 0.2109 | 2.71 | 320 | 0.3797 |
| 0.1606 | 2.74 | 324 | 0.3797 |
| 0.2226 | 2.78 | 328 | 0.3796 |
| 0.2068 | 2.81 | 332 | 0.3798 |
| 0.1547 | 2.85 | 336 | 0.3797 |
| 0.2513 | 2.88 | 340 | 0.3796 |
| 0.2688 | 2.91 | 344 | 0.3797 |
| 0.1481 | 2.95 | 348 | 0.3796 |
| 0.1443 | 2.98 | 352 | 0.3797 |
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Coollaps/result
|
Coollaps
| 2024-01-18T21:04:16Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"camembert",
"token-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base",
"base_model:finetune:almanach/camembert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-18T20:49:12Z |
---
license: mit
base_model: camembert-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: result
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# result
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0181
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9960
## Model description
More information needed
## Intended uses & limitations
More information needed
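A minimal inference sketch (not part of the original card); note that the reported F1 above is 0.0, so predictions should be treated with caution. The example sentence is illustrative only.
```python
# Hedged sketch: token classification with the fine-tuned CamemBERT checkpoint.
from transformers import pipeline

ner = pipeline("token-classification", model="Coollaps/result", aggregation_strategy="simple")
print(ner("Emmanuel Macron est né à Amiens le 21 décembre 1977."))
```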
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 25 | 0.0900 | 0.0 | 0.0 | 0.0 | 0.9818 |
| No log | 2.0 | 50 | 0.0557 | 0.0 | 0.0 | 0.0 | 0.9822 |
| No log | 3.0 | 75 | 0.0546 | 0.0 | 0.0 | 0.0 | 0.9889 |
| No log | 4.0 | 100 | 0.0287 | 0.0 | 0.0 | 0.0 | 0.9945 |
| No log | 5.0 | 125 | 0.0240 | 0.0 | 0.0 | 0.0 | 0.9941 |
| No log | 6.0 | 150 | 0.0179 | 0.0 | 0.0 | 0.0 | 0.9956 |
| No log | 7.0 | 175 | 0.0163 | 0.0 | 0.0 | 0.0 | 0.9964 |
| No log | 8.0 | 200 | 0.0189 | 0.0 | 0.0 | 0.0 | 0.9952 |
| No log | 9.0 | 225 | 0.0169 | 0.0 | 0.0 | 0.0 | 0.9960 |
| No log | 10.0 | 250 | 0.0145 | 0.0 | 0.0 | 0.0 | 0.9968 |
| No log | 11.0 | 275 | 0.0164 | 0.0 | 0.0 | 0.0 | 0.9964 |
| No log | 12.0 | 300 | 0.0153 | 0.0 | 0.0 | 0.0 | 0.9964 |
| No log | 13.0 | 325 | 0.0153 | 0.0 | 0.0 | 0.0 | 0.9964 |
| No log | 14.0 | 350 | 0.0161 | 0.0 | 0.0 | 0.0 | 0.9964 |
| No log | 15.0 | 375 | 0.0164 | 0.0 | 0.0 | 0.0 | 0.9964 |
| No log | 16.0 | 400 | 0.0166 | 0.0 | 0.0 | 0.0 | 0.9964 |
| No log | 17.0 | 425 | 0.0161 | 0.0 | 0.0 | 0.0 | 0.9964 |
| No log | 18.0 | 450 | 0.0166 | 0.0 | 0.0 | 0.0 | 0.9956 |
| No log | 19.0 | 475 | 0.0182 | 0.0 | 0.0 | 0.0 | 0.9960 |
| 0.0262 | 20.0 | 500 | 0.0181 | 0.0 | 0.0 | 0.0 | 0.9960 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
chargoddard/internlm2-base-7b-llama
|
chargoddard
| 2024-01-18T21:01:57Z | 14 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"base_model:internlm/internlm2-base-7b",
"base_model:finetune:internlm/internlm2-base-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T06:12:44Z |
---
license: other
language:
- en
- zh
base_model: internlm/internlm2-base-7b
---
# InternLM (but it's Llama)
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">hot</font></i>
</a>
</sup>
<div> </div>
</div>
</div>
[internlm2-base-7b](https://huggingface.co/internlm/internlm2-base-7b) converted into Llama-format weights.
Subject to internlm's license.
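Since the weights are in Llama format, they should load with the standard `transformers` classes; a minimal sketch (the prompt is illustrative only):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chargoddard/internlm2-base-7b-llama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("A list of colors: red, blue,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```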
|
anakib1/whisper-multi-diar-wer
|
anakib1
| 2024-01-18T21:01:37Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-18T19:18:32Z |
---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-multi-diar-wer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-multi-diar-wer
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.0666
- Wer: 582.8864
- Cer: 168.6522
- Speech Scored: 693831
- Speech Miss: 52252
- Speech Falarm: 117030
- Speaker Miss: 52252
- Speaker Falarm: 117030
- Speaker Error: 187216
- Speaker Correct: 1437.5240
- Diarization Error: 356498
- Frames: 600
- Speaker Wide Frames: 746083
- Speech Scored Ratio: 1156.385
- Speech Miss Ratio: 87.0867
- Speech Falarm Ratio: 195.05
- Speaker Correct Ratio: 2.3959
- Speaker Miss Ratio: 0.0700
- Speaker Falarm Ratio: 0.1569
- Speaker Error Ratio: 0.2509
- Diarization Error Ratio: 0.4778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Speech Scored | Speech Miss | Speech Falarm | Speaker Miss | Speaker Falarm | Speaker Error | Speaker Correct | Diarization Error | Frames | Speaker Wide Frames | Speech Scored Ratio | Speech Miss Ratio | Speech Falarm Ratio | Speaker Correct Ratio | Speaker Miss Ratio | Speaker Falarm Ratio | Speaker Error Ratio | Diarization Error Ratio |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-------------:|:-----------:|:-------------:|:------------:|:--------------:|:-------------:|:---------------:|:-----------------:|:------:|:-------------------:|:-------------------:|:-----------------:|:-------------------:|:---------------------:|:------------------:|:--------------------:|:-------------------:|:-----------------------:|
| 11.3437 | 1.0 | 42 | 10.7905 | 574.9471 | 166.1650 | 743633 | 2450 | 150302 | 2450 | 150302 | 202641 | 1427.9773 | 355393 | 600 | 746083 | 1239.3883 | 4.0833 | 250.5033 | 2.3800 | 0.0033 | 0.2015 | 0.2716 | 0.4763 |
| 10.3627 | 2.0 | 84 | 10.4901 | 578.0875 | 167.1479 | 735121 | 10962 | 136397 | 10962 | 136397 | 201434 | 1433.1820 | 348793 | 600 | 746083 | 1225.2017 | 18.27 | 227.3283 | 2.3886 | 0.0147 | 0.1828 | 0.2700 | 0.4675 |
| 9.9444 | 3.0 | 126 | 10.3015 | 569.6851 | 166.4943 | 715188 | 30895 | 127221 | 30895 | 127221 | 194291 | 1435.5347 | 352407 | 600 | 746083 | 1191.98 | 51.4917 | 212.035 | 2.3926 | 0.0414 | 0.1705 | 0.2604 | 0.4723 |
| 9.7658 | 4.0 | 168 | 10.2071 | 572.0536 | 166.8688 | 706081 | 40002 | 122962 | 40002 | 122962 | 191357 | 1436.2147 | 354321 | 600 | 746083 | 1176.8017 | 66.67 | 204.9367 | 2.3937 | 0.0536 | 0.1648 | 0.2565 | 0.4749 |
| 9.5093 | 5.0 | 210 | 10.1640 | 572.3712 | 166.9189 | 703250 | 42833 | 121335 | 42833 | 121335 | 190255 | 1436.8813 | 354423 | 600 | 746083 | 1172.0833 | 71.3883 | 202.225 | 2.3948 | 0.0574 | 0.1626 | 0.2550 | 0.4750 |
| 9.3069 | 6.0 | 252 | 10.1287 | 573.2534 | 167.0644 | 700202 | 45881 | 119938 | 45881 | 119938 | 189349 | 1436.9886 | 355168 | 600 | 746083 | 1167.0033 | 76.4683 | 199.8967 | 2.3950 | 0.0615 | 0.1608 | 0.2538 | 0.4760 |
| 9.2209 | 7.0 | 294 | 10.1009 | 582.8864 | 168.6522 | 698009 | 48074 | 118866 | 48074 | 118866 | 188639 | 1437.1880 | 355579 | 600 | 746083 | 1163.3483 | 80.1233 | 198.11 | 2.3953 | 0.0644 | 0.1593 | 0.2528 | 0.4766 |
| 9.0761 | 8.0 | 336 | 10.0912 | 582.8864 | 168.6522 | 695719 | 50364 | 117834 | 50364 | 117834 | 187684 | 1437.6227 | 355882 | 600 | 746083 | 1159.5317 | 83.94 | 196.39 | 2.3960 | 0.0675 | 0.1579 | 0.2516 | 0.4770 |
| 8.9928 | 9.0 | 378 | 10.0654 | 582.8864 | 168.6522 | 694031 | 52052 | 117145 | 52052 | 117145 | 187295 | 1437.4753 | 356492 | 600 | 746083 | 1156.7183 | 86.7533 | 195.2417 | 2.3958 | 0.0698 | 0.1570 | 0.2510 | 0.4778 |
| 8.9674 | 10.0 | 420 | 10.0666 | 582.8864 | 168.6522 | 693831 | 52252 | 117030 | 52252 | 117030 | 187216 | 1437.5240 | 356498 | 600 | 746083 | 1156.385 | 87.0867 | 195.05 | 2.3959 | 0.0700 | 0.1569 | 0.2509 | 0.4778 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
efalcon/distilroberta-base-finetuned-wikitext2
|
efalcon
| 2024-01-18T20:59:05Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-18T20:51:33Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8259
## Model description
More information needed
## Intended uses & limitations
More information needed
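A minimal fill-mask sketch (not part of the original card); RoBERTa-style checkpoints use the `<mask>` token, and the example sentence is illustrative only.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="efalcon/distilroberta-base-finetuned-wikitext2")
print(unmasker("The capital of France is <mask>."))
```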
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0923 | 1.0 | 2406 | 1.9286 |
| 1.9988 | 2.0 | 4812 | 1.8677 |
| 1.9417 | 3.0 | 7218 | 1.8551 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
2O24dpower2024/xlm-roberta-base-finetuned-panx-all
|
2O24dpower2024
| 2024-01-18T20:55:55Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-12T23:37:46Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1742
- F1: 0.8541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3026 | 1.0 | 835 | 0.1851 | 0.8182 |
| 0.1575 | 2.0 | 1670 | 0.1712 | 0.8413 |
| 0.1031 | 3.0 | 2505 | 0.1742 | 0.8541 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ElonTusk2001/zephyr-7b-sft-qlora
|
ElonTusk2001
| 2024-01-18T20:53:26Z | 69 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-01-15T12:03:54Z |
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-sft-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-qlora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9523
## Model description
More information needed
## Intended uses & limitations
More information needed
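A minimal inference sketch (not part of the original card), assuming this repo hosts the QLoRA adapter on top of `mistralai/Mistral-7B-v0.1` rather than merged weights; the prompt is illustrative only.
```python
# Hedged sketch: load the adapter directly with PEFT's auto class.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "ElonTusk2001/zephyr-7b-sft-qlora"
model = AutoPeftModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("Explain gradient accumulation in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=80)[0], skip_special_tokens=True))
```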
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.913 | 1.0 | 17428 | 0.9523 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
efalcon/distilgpt2-finetuned-wikitext2
|
efalcon
| 2024-01-18T20:51:18Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T20:44:10Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6426
## Model description
More information needed
## Intended uses & limitations
More information needed
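A minimal text-generation sketch (not part of the original card); the prompt is illustrative only.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="efalcon/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```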
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7556 | 1.0 | 2334 | 3.6647 |
| 3.6394 | 2.0 | 4668 | 3.6488 |
| 3.5939 | 3.0 | 7002 | 3.6426 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
TunahanGokcimen/Question-Answering-xlm-roberta-base
|
TunahanGokcimen
| 2024-01-18T20:50:46Z | 22 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/xlm-roberta-base-squad2-distilled",
"base_model:finetune:deepset/xlm-roberta-base-squad2-distilled",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-18T18:30:45Z |
---
license: mit
base_model: deepset/xlm-roberta-base-squad2-distilled
tags:
- generated_from_trainer
model-index:
- name: Question-Answering-xlm-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Question-Answering-xlm-roberta-base
This model is a fine-tuned version of [deepset/xlm-roberta-base-squad2-distilled](https://huggingface.co/deepset/xlm-roberta-base-squad2-distilled) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
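A minimal question-answering sketch (not part of the original card); the question and context are illustrative only.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="TunahanGokcimen/Question-Answering-xlm-roberta-base")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], result["score"])
```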
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
2O24dpower2024/xlm-roberta-base-finetuned-panx-it
|
2O24dpower2024
| 2024-01-18T20:50:19Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-12T23:32:00Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8253578732106339
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2564
- F1: 0.8254
## Model description
More information needed
## Intended uses & limitations
More information needed
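A minimal inference sketch (not part of the original card); the Italian example sentence is illustrative only.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="2O24dpower2024/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",
)
print(ner("Giuseppe Verdi è nato vicino a Busseto, in Italia."))
```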
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8096 | 1.0 | 70 | 0.3540 | 0.7383 |
| 0.299 | 2.0 | 140 | 0.2677 | 0.7891 |
| 0.1937 | 3.0 | 210 | 0.2564 | 0.8254 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
konsman/setfit-messages-generated-test
|
konsman
| 2024-01-18T20:38:03Z | 5 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-01-18T20:37:46Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: A gentle nudge to complete the healthcare webinar questionnaire sent last
week.
- text: Sudden severe chest pain, suspecting a cardiac emergency.
- text: Annual physical examination due in Tuesday, March 05. Please book an appointment.
- text: Please confirm your attendance at the lifestyle next month.
- text: Could you verify your emergency contact details in our records?
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.85
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'Rapid onset of confusion and weakness, urgent evaluation needed.'</li><li>'Unconscious patient found, immediate medical response required.'</li><li>'Urgent: Suspected heart attack, immediate medical attention required.'</li></ul> |
| 1 | <ul><li>'Reminder: Your dental check-up is scheduled for Monday, February 05.'</li><li>'Reminder: Your dental check-up is scheduled for Saturday, February 24.'</li><li>'Nutritionist appointment reminder for Sunday, January 21.'</li></ul> |
| 0 | <ul><li>'Could you verify your lifestyle contact details in our records?'</li><li>'Kindly update your emergency contact list at your earliest convenience.'</li><li>'We request you to update your wellness information for our records.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.85 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("konsman/setfit-messages-generated-test")
# Run inference
preds = model("Sudden severe chest pain, suspecting a cardiac emergency.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 7 | 10.125 | 12 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 16 |
| 1 | 16 |
| 2 | 16 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 40
- body_learning_rate: (2.2041595048800003e-05, 2.2041595048800003e-05)
- head_learning_rate: 2.2041595048800003e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0021 | 1 | 0.2841 | - |
| 0.1042 | 50 | 0.0603 | - |
| 0.2083 | 100 | 0.0017 | - |
| 0.3125 | 150 | 0.0003 | - |
| 0.4167 | 200 | 0.0004 | - |
| 0.5208 | 250 | 0.0003 | - |
| 0.625 | 300 | 0.0003 | - |
| 0.7292 | 350 | 0.0002 | - |
| 0.8333 | 400 | 0.0003 | - |
| 0.9375 | 450 | 0.0001 | - |
| 1.0417 | 500 | 0.0002 | - |
| 1.1458 | 550 | 0.0003 | - |
| 1.25 | 600 | 0.0002 | - |
| 1.3542 | 650 | 0.0002 | - |
| 1.4583 | 700 | 0.0001 | - |
| 1.5625 | 750 | 0.0002 | - |
| 1.6667 | 800 | 0.0001 | - |
| 1.7708 | 850 | 0.0001 | - |
| 1.875 | 900 | 0.0001 | - |
| 1.9792 | 950 | 0.0002 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
BestPhilForSure/detr-resnet-50_finetuned_cppe5
|
BestPhilForSure
| 2024-01-18T20:17:58Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-01-18T17:59:24Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
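A minimal object-detection sketch (not part of the original card); the image path is a placeholder, and the CPPE-5 label set is assumed from the model name.
```python
from transformers import pipeline
from PIL import Image

detector = pipeline("object-detection", model="BestPhilForSure/detr-resnet-50_finetuned_cppe5")
image = Image.open("example.jpg")  # placeholder path; substitute your own image
for detection in detector(image):
    print(detection["label"], round(detection["score"], 3), detection["box"])
```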
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
neuralmagic/llama-2-7b-chat-marlin
|
neuralmagic
| 2024-01-18T20:08:45Z | 2,323 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"marlin",
"region:us"
] |
text-generation
| 2024-01-18T19:36:20Z |
---
license: llama2
language:
- en
library_name: transformers
---
## llama-2-7b-chat-marlin
Example of converting a GPTQ model to Marlin format for fast batched decoding with [Marlin Kernels](https://github.com/IST-DASLab/marlin)
### Install Marlin
```bash
pip install torch
git clone https://github.com/IST-DASLab/marlin.git
cd marlin
pip install -e .
```
### Convert Model
Convert the model from GPTQ to Marlin format. Note that this requires:
- `sym=true`
- `group_size=128`
- `desc_activations=false`
```bash
pip install -U transformers accelerate auto-gptq optimum
```
Convert with the `convert.py` script in this repo:
```bash
python3 convert.py --model-id "TheBloke/Llama-2-7B-Chat-GPTQ" --save-path "./marlin-model" --do-generation
```
### Run Model
Load with the `load.load_model` utility from this repo and run inference as usual.
```python
from load import load_model
from transformers import AutoTokenizer
# Load model from disk.
model_path = "./marlin-model"
model = load_model(model_path).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Generate text.
inputs = tokenizer("My favorite song is", return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.batch_decode(outputs)[0])
```
|
Weni/WeniGPT-2.1.1-zephyr-7b-beta-BitsandBytes-LLM-Base-1.0.1-6k_evol_complexity_no_tags
|
Weni
| 2024-01-18T20:05:24Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2024-01-18T19:34:57Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: HuggingFaceH4/zephyr-7b-beta
model-index:
- name: WeniGPT-2.1.1-zephyr-7b-beta-BitsandBytes-LLM-Base-1.0.1-6k_evol_complexity_no_tags
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WeniGPT-2.1.1-zephyr-7b-beta-BitsandBytes-LLM-Base-1.0.1-6k_evol_complexity_no_tags
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4708
## Model description
More information needed
## Intended uses & limitations
More information needed
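A minimal inference sketch (not part of the original card), assuming this repo hosts the LoRA adapter on top of `HuggingFaceH4/zephyr-7b-beta` rather than merged weights; the prompt is illustrative only.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
adapter = "Weni/WeniGPT-2.1.1-zephyr-7b-beta-BitsandBytes-LLM-Base-1.0.1-6k_evol_complexity_no_tags"
model = PeftModel.from_pretrained(base, adapter)

inputs = tokenizer("Olá! Em uma frase, o que é aprendizado de máquina?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```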
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 42
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.99 | 33 | 0.4729 |
| No log | 1.25 | 42 | 0.4708 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mar-yam1497/mistralai-Code-Instruct-hotpot_v2
|
mar-yam1497
| 2024-01-18T20:04:25Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"question-answering",
"en",
"dataset:mar-yam1497/HotPotQA_Mistral_Instruct_dataset_Top3k_Revised",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] |
question-answering
| 2023-12-31T18:42:33Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- mar-yam1497/HotPotQA_Mistral_Instruct_dataset_Top3k_Revised
language:
- en
pipeline_tag: question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Trained as a component of a QnA system: it extracts the context relevant to answering a given question from a large text stream.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
maya999/GPTQ_MISTRAL_finetuned_on_spider_data
|
maya999
| 2024-01-18T20:00:24Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T08:49:31Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: GPTQ_MISTRAL_finetuned_on_spider_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPTQ_MISTRAL_finetuned_on_spider_data
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 2500
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ryusangwon/8139_Llama-2-7b-hf
|
ryusangwon
| 2024-01-18T19:42:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:cnn_dailymail",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-18T19:42:35Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: 8139_Llama-2-7b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 8139_Llama-2-7b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sefa23/normalization-bert
|
sefa23
| 2024-01-18T19:38:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-18T18:33:26Z |
---
widget:
- text: eve varınca beni [MASK].
example_title: Örnek 1
- text: bugün okulda [MASK] gördüm.
example_title: Örnek 2
license: mit
language:
- tr
pipeline_tag: fill-mask
---
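The card provides only the widget examples above; a minimal fill-mask sketch using one of them (not part of the original card):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="sefa23/normalization-bert")
print(unmasker("eve varınca beni [MASK]."))
```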
|
bartowski/dolphin-2.6-mistral-7b-dpo-exl2
|
bartowski
| 2024-01-18T19:25:33Z | 9 | 6 | null |
[
"text-generation",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-12-31T17:23:00Z |
---
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of dolphin-2.6-mistral-7b-dpo
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo
Model Size: 7b
| Branch | Bits | lm_head bits | Dataset | Size | Description |
| ----- | ---- | ------- | ------- | ------- | ------------ |
| [8_0](https://huggingface.co/Bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/8_0) | 8.0 | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/6_5) | 6.5 | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/5_0) | 5.0 | 6.0 | Default | 7.4 GB | Slightly lower perplexity vs 6.5. |
| [4_0](https://huggingface.co/Bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/4_0) | 4.0 | 6.0 | Default | 6.5 GB | Just under GPTQ equivalent bits per weight. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
<a href="https://huggingface.co/bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/4_0">4.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/dolphin-2.6-mistral-7b-dpo-exl2/tree/8_0">8.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/dolphin-2.6-mistral-7b-dpo-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `dolphin-2.6-mistral-7b-dpo-exl2`:
```shell
mkdir dolphin-2.6-mistral-7b-dpo-exl2
huggingface-cli download bartowski/dolphin-2.6-mistral-7b-dpo-exl2 --local-dir dolphin-2.6-mistral-7b-dpo-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir dolphin-2.6-mistral-7b-dpo-exl2-6_5
huggingface-cli download bartowski/dolphin-2.6-mistral-7b-dpo-exl2 --revision 6_5 --local-dir dolphin-2.6-mistral-7b-dpo-exl2-6_5 --local-dir-use-symlinks False
```
|
ashishkgpian/full_v3_astromistral_final
|
ashishkgpian
| 2024-01-18T19:23:13Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"astronomy",
"text2text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text2text-generation
| 2024-01-18T18:50:17Z |
---
library_name: transformers
tags:
- astronomy
license: mit
language:
- en
pipeline_tag: text2text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2
|
bartowski
| 2024-01-18T19:21:14Z | 0 | 0 | null |
[
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:LDJnr/Capybara",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-13T03:56:42Z |
---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- LDJnr/Capybara
language:
- en
license: apache-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of UNA-dolphin-2.6-mistral-7b-dpo-laser
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
Model Size: 7b
| Branch | Bits | lm_head bits | Dataset | Size | Description |
| ----- | ---- | ------- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2/tree/8_0) | 8.0 | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2/tree/6_5) | 6.5 | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2/tree/5_0) | 5.0 | 6.0 | Default | 7.4 GB | Slightly lower perplexity vs 6.5. |
| [4_0](https://huggingface.co/Bartowski/UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2/tree/4_0) | 4.0 | 6.0 | Default | 6.5 GB | Just under GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2/tree/3_5) | 3.5 | 6.0 | Default | 6.1 GB | Lower quality, only use if you have to. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2`:
```shell
mkdir UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2
huggingface-cli download bartowski/UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2 --local-dir UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2-6_5
huggingface-cli download bartowski/UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2 --revision 6_5 --local-dir UNA-dolphin-2.6-mistral-7b-dpo-laser-exl2-6_5 --local-dir-use-symlinks False
```
|
bartowski/internlm2-chat-7b-sft-llama-exl2
|
bartowski
| 2024-01-18T19:19:32Z | 4 | 1 | null |
[
"text-generation",
"region:us"
] |
text-generation
| 2024-01-18T04:07:59Z |
---
pipeline_tag: text-generation
quantized_by: bartowski
---
#### Special thanks to <a href="https://huggingface.co/chargoddard">Charles Goddard</a> for the conversion script to create llama models from internlm
## Exllama v2 Quantizations of internlm2-chat-7b-sft-llama
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/internlm/internlm2-chat-7b-sft
Model Size: 7b
| Branch | Bits | lm_head bits | Dataset | Size | Description |
| ----- | ---- | ------- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/internlm2-chat-7b-sft-llama-exl2/tree/8_0) | 8.0 | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/internlm2-chat-7b-sft-llama-exl2/tree/6_5) | 6.5 | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/internlm2-chat-7b-sft-llama-exl2/tree/5_0) | 5.0 | 6.0 | Default | 7.4 GB | Slightly lower perplexity vs 6.5. |
| [4_0](https://huggingface.co/Bartowski/internlm2-chat-7b-sft-llama-exl2/tree/4_0) | 4.0 | 6.0 | Default | 6.5 GB | Just under GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/internlm2-chat-7b-sft-llama-exl2/tree/3_5) | 3.5 | 6.0 | Default | 6.1 GB | Lower quality, only use if you have to. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-chat-7b-sft-llama-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just need the measurement.json) to a folder called `internlm2-chat-7b-sft-llama-exl2`:
```shell
mkdir internlm2-chat-7b-sft-llama-exl2
huggingface-cli download bartowski/internlm2-chat-7b-sft-llama-exl2 --local-dir internlm2-chat-7b-sft-llama-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir internlm2-chat-7b-sft-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-chat-7b-sft-llama-exl2 --revision 6_5 --local-dir internlm2-chat-7b-sft-llama-exl2-6_5 --local-dir-use-symlinks False
```
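The same download can also be scripted from Python with `huggingface_hub`; a small sketch (the target folder name is just an example):
```python
from huggingface_hub import snapshot_download

# Download the 6_5 branch into a local folder.
snapshot_download(
    repo_id="bartowski/internlm2-chat-7b-sft-llama-exl2",
    revision="6_5",
    local_dir="internlm2-chat-7b-sft-llama-exl2-6_5",
    local_dir_use_symlinks=False,
)
```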
|
gonmadri/ancelotti
|
gonmadri
| 2024-01-18T19:17:58Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-01-18T19:11:54Z |
---
license: other
license_name: ancelotti
license_link: LICENSE
---
|
thrunlab/Mistral-7B-v0.1_cola_original2
|
thrunlab
| 2024-01-18T19:17:21Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T17:15:45Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
metrics:
- accuracy
- matthews_correlation
model-index:
- name: Mistral-7B-v0.1_cola_original2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1_cola_original2
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3773
- Accuracy: 0.8540
- Matthews Correlation: 0.6454
## Model description
More information needed
## Intended uses & limitations
More information needed
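The card provides no usage snippet. As a rough sketch only, and assuming the checkpoint exposes a sequence-classification head (which the reported accuracy and Matthews-correlation metrics suggest), inference might look like this:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the repo contains a CoLA-style sequence-classification head.
name = "thrunlab/Mistral-7B-v0.1_cola_original2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The book was read by the whole class.", return_tensors="pt").to(model.device)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)
```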
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 2
- distributed_type: multi-GPU
- num_devices: 6
- total_train_batch_size: 384
- total_eval_batch_size: 384
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 750
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------------------:|
| 0.4986        | 2.38  | 50   | 0.5042          | 0.7939   | 0.4836               |
| 0.3193        | 4.76  | 100  | 0.4002          | 0.8265   | 0.6029               |
| 0.2489        | 7.14  | 150  | 0.3795          | 0.8389   | 0.6123               |
| 0.1258        | 9.52  | 200  | 0.4322          | 0.8418   | 0.6284               |
| 0.0625        | 11.9  | 250  | 0.5921          | 0.8428   | 0.6258               |
| 0.0251        | 14.29 | 300  | 0.8451          | 0.8447   | 0.6248               |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
C-Stuti/output
|
C-Stuti
| 2024-01-18T19:07:32Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T13:09:36Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7643
- Accuracy: 0.8686
- Precision: 0.8681
- Recall: 0.8686
- F1: 0.8673
## Model description
More information needed
## Intended uses & limitations
More information needed
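Since no usage snippet is given, here is a minimal inference sketch with the `transformers` pipeline; the label names depend on the unknown training dataset:
```python
from transformers import pipeline

# Label names come from the (undocumented) training dataset's config.
classifier = pipeline("text-classification", model="C-Stuti/output")
print(classifier("Replace this with a sentence from the target domain."))
```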
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2969 | 1.0 | 505 | 0.6707 | 0.8376 | 0.8375 | 0.8376 | 0.8320 |
| 0.2567 | 2.0 | 1010 | 0.6184 | 0.8572 | 0.8516 | 0.8572 | 0.8519 |
| 0.1496 | 3.0 | 1515 | 0.6471 | 0.8693 | 0.8637 | 0.8693 | 0.8651 |
| 0.0826 | 4.0 | 2020 | 0.6897 | 0.8641 | 0.8600 | 0.8641 | 0.8604 |
| 0.0467 | 5.0 | 2525 | 0.7378 | 0.8676 | 0.8671 | 0.8676 | 0.8663 |
| 0.0229 | 6.0 | 3030 | 0.7521 | 0.8678 | 0.8670 | 0.8678 | 0.8666 |
| 0.01 | 7.0 | 3535 | 0.7643 | 0.8686 | 0.8681 | 0.8686 | 0.8673 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
bartowski/internlm2-7b-llama-exl2
|
bartowski
| 2024-01-18T18:58:45Z | 2 | 1 | null |
[
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2024-01-18T18:21:58Z |
---
pipeline_tag: text-generation
license: other
quantized_by: bartowski
---
## Exllama v2 Quantizations of internlm2-7b-llama
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/internlm/internlm2-7b
Model Size: 7b
| Branch | Bits | lm_head | Dataset | Size | Description |
| ----- | ---- | ------- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/8_0) | 8.0 | 8.0 | Default | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/6_5) | 6.5 | 8.0 | Default | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/5_0) | 5.0 | 6.0 | Default | 7.4 GB | Slightly lower perplexity vs 6.5. |
| [4_25](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/4_25) | 4.25 | 6.0 | Default | 6.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/internlm2-7b-llama-exl2/tree/3_5) | 3.5 | 6.0 | Default | 6.1 GB | Lower quality, only use if you have to. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-7b-llama-exl2 internlm2-7b-llama-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you just need the measurement.json) to a folder called `internlm2-7b-llama-exl2`:
```shell
mkdir internlm2-7b-llama-exl2
huggingface-cli download bartowski/internlm2-7b-llama-exl2 --local-dir internlm2-7b-llama-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir internlm2-7b-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-7b-llama-exl2 --revision 6_5 --local-dir internlm2-7b-llama-exl2-6_5 --local-dir-use-symlinks False
```
|
paapoh/ppo-LunarLander-v2
|
paapoh
| 2024-01-18T18:41:38Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-18T18:41:11Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.44 +/- 18.81
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check this repo's file list for the exact checkpoint name.
checkpoint = load_from_hub(repo_id="paapoh/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Tarssio/modelo_setfit_politica_BA
|
Tarssio
| 2024-01-18T18:37:08Z | 7 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-01-18T18:36:26Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: Vem pra Irenil em Paratinga, bonitão
- text: Salve Salve Senhor Governador JERÔNIMO RODRIGUES olhando para as TRADIÇÕES
- text: Parabéns meu Governador! O foguete 🚀 não para . Muitas realizações entregue
em 7 meses , muito trabalho .
- text: 👏👏👏
- text: Bom demais governador sobre o piso da enfermagem o que o senhor diz para nos
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9042553191489362
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Positive | <ul><li>'Enfim,Bonfim 🥳🥳🥳🥳🥳'</li><li>'👏👏👏👏'</li><li>'Pequenas ações fazem sonhos realidades #OhBrabo 💙💙💙'</li></ul> |
| Negative | <ul><li>'@jeronimorodriguesba quando terá uma segunda convocação do concurso SECBA?'</li><li>'Cadê a MP do piso da enfermagem ministro'</li><li>'Sim !! A escola municipal aqui do bairro liberdade,30 crianças esperando até hoje as profissionais ADI para crianças que necessita acompanhamento..'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9043 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Tarssio/modelo_setfit_politica_BA")
# Run inference
preds = model("👏👏👏")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 19.4813 | 313 |
| Label | Training Sample Count |
|:---------|:----------------------|
| Negative | 175 |
| Positive | 199 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 5
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
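For context, the hyperparameters above correspond to SetFit `TrainingArguments` fields. A rough training sketch under that assumption (the tiny placeholder datasets below are not the real training data):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder datasets; the actual training data is not part of this repo.
train_dataset = Dataset.from_dict({"text": ["Parabéns!", "Cadê o piso?"], "label": ["Positive", "Negative"]})
eval_dataset = Dataset.from_dict({"text": ["👏👏👏"], "label": ["Positive"]})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
args = TrainingArguments(
    batch_size=4,
    num_epochs=4,
    num_iterations=5,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    load_best_model_at_end=True,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.train()
```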
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0011 | 1 | 0.3616 | - |
| 0.0535 | 50 | 0.3129 | - |
| 0.1070 | 100 | 0.2912 | - |
| 0.1604 | 150 | 0.191 | - |
| 0.2139 | 200 | 0.0907 | - |
| 0.2674 | 250 | 0.0086 | - |
| 0.3209 | 300 | 0.0042 | - |
| 0.3743 | 350 | 0.0161 | - |
| 0.4278 | 400 | 0.0007 | - |
| 0.4813 | 450 | 0.0403 | - |
| 0.5348 | 500 | 0.0055 | - |
| 0.5882 | 550 | 0.0057 | - |
| 0.6417 | 600 | 0.0002 | - |
| 0.6952 | 650 | 0.0002 | - |
| 0.7487 | 700 | 0.0 | - |
| 0.8021 | 750 | 0.0026 | - |
| 0.8556 | 800 | 0.0002 | - |
| 0.9091 | 850 | 0.0002 | - |
| 0.9626 | 900 | 0.0004 | - |
| 1.0 | 935 | - | 0.1724 |
| 1.0160 | 950 | 0.0001 | - |
| 1.0695 | 1000 | 0.0006 | - |
| 1.1230 | 1050 | 0.0001 | - |
| 1.1765 | 1100 | 0.0008 | - |
| 1.2299 | 1150 | 0.0002 | - |
| 1.2834 | 1200 | 0.0001 | - |
| 1.3369 | 1250 | 0.0002 | - |
| 1.3904 | 1300 | 0.0002 | - |
| 1.4439 | 1350 | 0.0002 | - |
| 1.4973 | 1400 | 0.0002 | - |
| 1.5508 | 1450 | 0.0 | - |
| 1.6043 | 1500 | 0.0002 | - |
| 1.6578 | 1550 | 0.2178 | - |
| 1.7112 | 1600 | 0.0002 | - |
| 1.7647 | 1650 | 0.0001 | - |
| 1.8182 | 1700 | 0.0001 | - |
| 1.8717 | 1750 | 0.0003 | - |
| 1.9251 | 1800 | 0.0359 | - |
| 1.9786 | 1850 | 0.0001 | - |
| 2.0 | 1870 | - | 0.1601 |
| 2.0321 | 1900 | 0.0001 | - |
| 2.0856 | 1950 | 0.0002 | - |
| 2.1390 | 2000 | 0.0001 | - |
| 2.1925 | 2050 | 0.0001 | - |
| 2.2460 | 2100 | 0.0002 | - |
| 2.2995 | 2150 | 0.0002 | - |
| 2.3529 | 2200 | 0.0003 | - |
| 2.4064 | 2250 | 0.0001 | - |
| 2.4599 | 2300 | 0.0002 | - |
| 2.5134 | 2350 | 0.0001 | - |
| 2.5668 | 2400 | 0.0 | - |
| 2.6203 | 2450 | 0.0001 | - |
| 2.6738 | 2500 | 0.0 | - |
| 2.7273 | 2550 | 0.0001 | - |
| 2.7807 | 2600 | 0.0001 | - |
| 2.8342 | 2650 | 0.0 | - |
| 2.8877 | 2700 | 0.0 | - |
| 2.9412 | 2750 | 0.0 | - |
| 2.9947 | 2800 | 0.0001 | - |
| **3.0** | **2805** | **-** | **0.1568** |
| 3.0481 | 2850 | 0.0001 | - |
| 3.1016 | 2900 | 0.0001 | - |
| 3.1551 | 2950 | 0.0001 | - |
| 3.2086 | 3000 | 0.0001 | - |
| 3.2620 | 3050 | 0.0001 | - |
| 3.3155 | 3100 | 0.0045 | - |
| 3.3690 | 3150 | 0.0 | - |
| 3.4225 | 3200 | 0.0001 | - |
| 3.4759 | 3250 | 0.0002 | - |
| 3.5294 | 3300 | 0.0 | - |
| 3.5829 | 3350 | 0.0002 | - |
| 3.6364 | 3400 | 0.0 | - |
| 3.6898 | 3450 | 0.0 | - |
| 3.7433 | 3500 | 0.0002 | - |
| 3.7968 | 3550 | 0.0 | - |
| 3.8503 | 3600 | 0.0 | - |
| 3.9037 | 3650 | 0.0005 | - |
| 3.9572 | 3700 | 0.0001 | - |
| 4.0 | 3740 | - | 0.1574 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
DTG2005/q-FrozenLake-v1-4x4-noSlippery
|
DTG2005
| 2024-01-18T18:36:39Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-18T18:27:49Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup
# `load_from_hub` is the helper from the Hugging Face Deep RL course that downloads and unpickles the Q-table.
model = load_from_hub(repo_id="DTG2005/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
nm-testing/TinyLlama-1.1B-Chat-v1.0-pruned50-quant-ds-v2
|
nm-testing
| 2024-01-18T18:35:17Z | 2 | 0 |
transformers
|
[
"transformers",
"onnx",
"llama",
"text-generation",
"deepsparse",
"conversational",
"arxiv:2301.00774",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:quantized:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-18T17:52:56Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
inference: false
model_type: llama
prompt_template: |
<|im_start|>user\n
{prompt}<|im_end|>\n
<|im_start|>assistant\n
quantized_by: mwitiderrick
tags:
- deepsparse
---
## TinyLlama 1.1B Chat 1.0 - DeepSparse
This repo contains model files for [TinyLlama 1.1B Chat](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) optimized for [DeepSparse](https://github.com/neuralmagic/deepsparse), a CPU inference runtime for sparse models.
This model was quantized and pruned with [SparseGPT](https://arxiv.org/abs/2301.00774), using [SparseML](https://github.com/neuralmagic/sparseml).
## Inference
Install [DeepSparse LLM](https://github.com/neuralmagic/deepsparse) for fast inference on CPUs:
```bash
pip install deepsparse-nightly[llm]
```
Run in a [Python pipeline](https://github.com/neuralmagic/deepsparse/blob/main/docs/llms/text-generation-pipeline.md):
```python
from deepsparse import TextGeneration
prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
model = TextGeneration(model_path="hf:nm-testing/TinyLlama-1.1B-Chat-v1.0-pruned50-quant-ds-v2")
print(model(formatted_prompt, max_new_tokens=200).generations[0].text)
"""
Sure! Here's a recipe for making banana bread:
Ingredients:
- 1 banana
- 1 cup of all-purpose flour
- 1 cup of cocoa powder
- 1 cup of sugar
- 1 cup of melted coconut oil
- 1 cup of salt
Instructions:
1. Preheat the oven to 375°F.
2. Add the banana to the flour mixture, and mix until smooth.
3. Add the cocoa powder, sugar, melted coconut oil, salt, and mix until smooth.
4. Add the melted coconut oil, salt, and mix until smooth.
5. Add the melted coconut oil, salt, and mix until smooth.
6. Add the banana, salt, and mix until smooth.
"""
```
## Prompt template
```
<|im_start|>user\n
{prompt}<|im_end|>\n
<|im_start|>assistant\n
```
## Sparsification
For details on how this model was sparsified, see the `recipe.yaml` in this repo and follow the instructions below.
```bash
git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"
python sparseml/src/sparseml/transformers/sparsification/obcq/obcq.py TinyLlama/TinyLlama-1.1B-Chat-v1.0 open_platypus --precision float32 --recipe recipe.yaml --save True
python sparseml/src/sparseml/transformers/sparsification/obcq/export.py --task text-generation --model_path obcq_deployment
cp deployment/model.onnx deployment/model-orig.onnx
```
Run this kv-cache injection to speed up the model at inference by caching the Key and Value states:
```python
import os
import onnx
from sparseml.exporters.kv_cache_injector import KeyValueCacheInjector
input_file = "deployment/model-orig.onnx"
output_file = "deployment/model.onnx"
model = onnx.load(input_file, load_external_data=False)
model = KeyValueCacheInjector(model_path=os.path.dirname(input_file)).apply(model)
onnx.save(model, output_file)
print(f"Modified model saved to: {output_file}")
```
Follow the instructions on our [One Shot With SparseML](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/transformers/sparsification/obcq) page for a step-by-step guide for performing one-shot quantization of large language models.
## Slack
For further support, and discussions on these models and AI in general, join [Neural Magic's Slack Community](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-q1a1cnvo-YBoICSIw3L1dmQpjBeDurQ)
|
Deepakkori45/Aspect_3
|
Deepakkori45
| 2024-01-18T18:35:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-18T18:35:00Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
climate-nlp/longformer-base-4096-1-detect-evidence
|
climate-nlp
| 2024-01-18T18:27:51Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"climate",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-05T20:15:46Z |
---
license: other
language:
- en
tags:
- climate
---
This model accompanies our paper 'An NLP Benchmark Dataset for Assessing Corporate Climate Policy Engagement' (NeurIPS 2023 Datasets and Benchmarks Track) and can be used to reproduce its results.
For usage of this model, see https://github.com/climate-nlp/climate-policy-nlp .
This model may be used for research purposes only. Commercial use or redistribution is prohibited.
We use third-party content for fine-tuning.
For more details and the usage limitations, see https://climate-nlp.github.io/.
|
climate-nlp/longformer-large-4096-1-detect-evidence
|
climate-nlp
| 2024-01-18T18:27:41Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"climate",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-05T20:27:03Z |
---
license: other
language:
- en
tags:
- climate
---
This model accompanies our paper 'An NLP Benchmark Dataset for Assessing Corporate Climate Policy Engagement' (NeurIPS 2023 Datasets and Benchmarks Track) and can be used to reproduce its results.
For usage of this model, see https://github.com/climate-nlp/climate-policy-nlp .
This model may be used for research purposes only. Commercial use or redistribution is prohibited.
We use third-party content for fine-tuning.
For more details and the usage limitations, see https://climate-nlp.github.io/.
|
climate-nlp/longformer-large-4096-2-classify-query
|
climate-nlp
| 2024-01-18T18:27:29Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"climate",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-05T21:13:08Z |
---
license: other
language:
- en
tags:
- climate
---
This model accompanies our paper 'An NLP Benchmark Dataset for Assessing Corporate Climate Policy Engagement' (NeurIPS 2023 Datasets and Benchmarks Track) and can be used to reproduce its results.
For usage of this model, see https://github.com/climate-nlp/climate-policy-nlp .
This model may be used for research purposes only. Commercial use or redistribution is prohibited.
We use third-party content for fine-tuning.
For more details and the usage limitations, see https://climate-nlp.github.io/.
|
climate-nlp/longformer-large-4096-3-classify-stance
|
climate-nlp
| 2024-01-18T18:27:20Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"climate",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-05T21:17:20Z |
---
license: other
language:
- en
tags:
- climate
---
This model accompanies our paper 'An NLP Benchmark Dataset for Assessing Corporate Climate Policy Engagement' (NeurIPS 2023 Datasets and Benchmarks Track) and can be used to reproduce its results.
For usage of this model, see https://github.com/climate-nlp/climate-policy-nlp .
This model may be used for research purposes only. Commercial use or redistribution is prohibited.
We use third-party content for fine-tuning.
For more details and the usage limitations, see https://climate-nlp.github.io/.
|
climate-nlp/longformer-base-4096-2-classify-query
|
climate-nlp
| 2024-01-18T18:27:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"climate",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-05T23:32:55Z |
---
license: other
language:
- en
tags:
- climate
---
This model accompanies our paper 'An NLP Benchmark Dataset for Assessing Corporate Climate Policy Engagement' (NeurIPS 2023 Datasets and Benchmarks Track) and can be used to reproduce its results.
For usage of this model, see https://github.com/climate-nlp/climate-policy-nlp .
This model may be used for research purposes only. Commercial use or redistribution is prohibited.
We use third-party content for fine-tuning.
For more details and the usage limitations, see https://climate-nlp.github.io/.
|
climate-nlp/longformer-base-4096-3-classify-stance
|
climate-nlp
| 2024-01-18T18:25:25Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"climate",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-05T23:34:47Z |
---
license: other
language:
- en
tags:
- climate
---
This model accompanies our paper 'An NLP Benchmark Dataset for Assessing Corporate Climate Policy Engagement' (NeurIPS 2023 Datasets and Benchmarks Track) and can be used to reproduce its results.
For usage of this model, see https://github.com/climate-nlp/climate-policy-nlp .
This model may be used for research purposes only. Commercial use or redistribution is prohibited.
We use third-party content for fine-tuning.
For more details and the usage limitations, see https://climate-nlp.github.io/.
|
rbgo/Super-phi-2-dpo
|
rbgo
| 2024-01-18T18:23:14Z | 13 | 1 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"finetune",
"rl",
"dpo",
"nlp",
"custom_code",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-18T16:50:31Z |
---
base_model: microsoft/phi-2
inference: false
language:
- en
license: mit
model-index:
- name: phi-2
results: []
model_creator: microsoft
model_name: phi-2
model_type: phi
prompt_template: |
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
finetuned_by: Inferless
tags:
- finetune
- rl
- dpo
- phi
- nlp
pipeline_tag: text-generation
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://pbs.twimg.com/profile_banners/1633782755669708804/1678359514/1500x500" alt="Inferless" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">Serverless GPUs to scale your machine learning inference without any hassle of managing servers, deploy complicated and custom models with ease.</p>
</div>
<!-- <div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div> -->
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;"><a href="https://0ooatrmbp25.typeform.com/to/nzuhQtba"><b>Join Private Beta</b></a></p></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Go through <a href="https://tutorials.inferless.com/deploy-deci-7b-using-inferless">this tutorial</a>, for quickly deploy of <b>Phi-2</b> using Inferless</p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
#
- Model creator: [microsoft](https://huggingface.co/microsoft)
- Original model: [phi-2](https://huggingface.co/microsoft/phi-2)
<!-- description start -->
## Description
This repo contains DPO fine-tuned model files for [Microsoft Phi-2](https://huggingface.co/microsoft/phi-2).
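The card does not include a usage snippet; a minimal sketch following the ChatML prompt template above (generation settings are illustrative, and `trust_remote_code=True` is needed for phi-2's custom code on older transformers versions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "rbgo/Super-phi-2-dpo"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto", trust_remote_code=True)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain DPO in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```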
|
TunahanGokcimen/Question-Answering-Albert-base
|
TunahanGokcimen
| 2024-01-18T18:17:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"albert",
"question-answering",
"generated_from_trainer",
"base_model:albert/albert-base-v1",
"base_model:finetune:albert/albert-base-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-18T16:54:42Z |
---
license: apache-2.0
base_model: albert-base-v1
tags:
- generated_from_trainer
model-index:
- name: Question-Answering-Albert-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Question-Answering-Albert-base
This model is a fine-tuned version of [albert-base-v1](https://huggingface.co/albert-base-v1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
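A minimal extractive question-answering sketch with the `transformers` pipeline (the training dataset is unknown, so treat the output as illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="TunahanGokcimen/Question-Answering-Albert-base")
result = qa(
    question="What is the model fine-tuned from?",
    context="Question-Answering-Albert-base is a fine-tuned version of albert-base-v1.",
)
print(result["answer"], result["score"])
```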
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ggaleano/bert-base-banking77-pt2
|
ggaleano
| 2024-01-18T18:17:29Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-18T13:20:35Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-banking77-pt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-banking77-pt2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3035
- F1: 0.9283
## Model description
More information needed
## Intended uses & limitations
More information needed
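No usage snippet is provided; a quick sketch, assuming the banking77-style intent labels are stored in the model config:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "ggaleano/bert-base-banking77-pt2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("I lost my card, how do I block it?", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1).item()
# If the repo's config lacks label names, this prints a generic LABEL_<id> entry.
print(model.config.id2label[pred])
```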
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1734 | 1.0 | 626 | 0.8356 | 0.8360 |
| 0.401 | 2.0 | 1252 | 0.3851 | 0.9170 |
| 0.1817 | 3.0 | 1878 | 0.3035 | 0.9283 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
rajora/distilbert-multilingual-sentiment
|
rajora
| 2024-01-18T18:12:07Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-18T12:51:06Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
model-index:
- name: distilbert-multilingual-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-multilingual-sentiment
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3619
- eval_accuracy: 0.7435
- eval_runtime: 25.7389
- eval_samples_per_second: 82.715
- eval_steps_per_second: 5.206
- epoch: 6.0
- step: 6390
## Model description
More information needed
## Intended uses & limitations
More information needed
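A short inference sketch (the label names depend on the undocumented training data):
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="rajora/distilbert-multilingual-sentiment")
print(sentiment(["This product is fantastic!", "Ce service est décevant."]))
```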
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
detoxil/water
|
detoxil
| 2024-01-18T18:11:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-18T18:04:31Z |
<h1>Detoxil Water: Protect Your Health with Antiparasitic Drops</h1>
Discover <a href="https://www.mejorvalorados.xyz/IkjS?sub1=hug"><b>Detoxil Water</b></a>, the antiparasitic drops designed to protect and improve your health. This unique product is available exclusively at mejorvalorados.xyz/IkjS, giving you an effective, easy-to-use solution to fight parasites and toxins in your body.
At an affordable price of 39 EUR, Detoxil Water offers a powerful, natural formula to detoxify your body. These drops are carefully made with high-quality ingredients, ensuring not only the elimination of parasites but also an overall improvement in your well-being. Regular use can contribute significantly to better digestion, higher energy levels, and a stronger immune system.
Visit <a href="https://www.mejorvalorados.xyz/IkjS?sub1=hug"><b>www.DetoxilWater.es</b></a> and get Detoxil Water today. A few drops a day can make a big difference in your health and quality of life. Don't wait any longer to take control of your well-being with this extraordinary product. Detoxil Water is your step toward a healthier, parasite-free life!
Official Website Here >>> <a href="https://www.mejorvalorados.xyz/IkjS?sub1=hug"><b>www.DetoxilWater.es</b></a>
|
InkFozy/Tuguim
|
InkFozy
| 2024-01-18T17:56:34Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"youtuber",
"brasileiro",
"zueira",
"art",
"audio-to-audio",
"pt",
"license:cc",
"region:us"
] |
audio-to-audio
| 2024-01-18T17:54:00Z |
---
license: cc
language:
- pt
metrics:
- accuracy
- character
library_name: adapter-transformers
pipeline_tag: audio-to-audio
tags:
- youtuber
- brasileiro
- zueira
- art
---
|
LoneStriker/internlm2-20b-llama-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-18T17:54:58Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"base_model:internlm/internlm2-7b",
"base_model:finetune:internlm/internlm2-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T17:48:39Z |
---
license: other
language:
- en
- zh
base_model: internlm/internlm2-7b
---
[internlm2-20b](https://huggingface.co/internlm/internlm2-20b) converted into Llama-format weights. As with 7B, not 100% sure if it's a correct conversion yet - play with at your own risk.
Subject to internlm's license.
|
Nazaninmnd/DreamBooth_MediumCloseUp
|
Nazaninmnd
| 2024-01-18T17:47:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-11T10:46:46Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a mcu photo of human
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Nazaninmnd/DreamBooth_MediumCloseUp
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a mcu photo of human" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
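A minimal diffusers sketch using the instance prompt the weights were trained on (assumes a CUDA GPU is available):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Nazaninmnd/DreamBooth_MediumCloseUp", torch_dtype=torch.float16
).to("cuda")
image = pipe("a mcu photo of human", num_inference_steps=30).images[0]
image.save("mcu_sample.png")
```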
|
beothorn/ppo-LunarLander-v2
|
beothorn
| 2024-01-18T17:44:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-18T17:44:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.74 +/- 16.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check this repo's file list for the exact checkpoint name.
checkpoint = load_from_hub(repo_id="beothorn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yomilimi/gamblingspam-t5
|
yomilimi
| 2024-01-18T17:33:00Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/t5-v1_1-base",
"base_model:finetune:google/t5-v1_1-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-18T17:30:01Z |
---
license: apache-2.0
base_model: google/t5-v1_1-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gamblingspam-t5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gamblingspam-t5
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6656
- Accuracy: 0.935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 0.6675 | 0.935 |
| No log | 2.0 | 100 | 0.6661 | 0.935 |
| No log | 3.0 | 150 | 0.6656 | 0.935 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
rodrisanchez/ppo-LunarLander-v2-rod
|
rodrisanchez
| 2024-01-18T17:31:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-18T17:30:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 238.19 +/- 39.01
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo's files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check this repo's file list for the exact checkpoint name.
checkpoint = load_from_hub(repo_id="rodrisanchez/ppo-LunarLander-v2-rod", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
nornor02/working
|
nornor02
| 2024-01-18T17:30:43Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-12-21T04:04:18Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: working
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# working
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
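A minimal fill-mask sketch (distilroberta-based models use the `<mask>` token):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="nornor02/working")
print(unmasker("The goal of a masked language model is to predict the <mask> token."))
```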
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
LoneStriker/internlm2-20b-llama-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-18T17:27:54Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"base_model:internlm/internlm2-7b",
"base_model:finetune:internlm/internlm2-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T17:23:23Z |
---
license: other
language:
- en
- zh
base_model: internlm/internlm2-7b
---
[internlm2-20b](https://huggingface.co/internlm/internlm2-20b) converted into Llama-format weights. As with 7B, not 100% sure if it's a correct conversion yet - play with at your own risk.
Subject to internlm's license.
|
lakshay/work-details-peft
|
lakshay
| 2024-01-18T17:21:37Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-18T11:22:33Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
LoneStriker/internlm2-20b-llama-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-18T17:14:46Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"base_model:internlm/internlm2-7b",
"base_model:finetune:internlm/internlm2-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T17:11:13Z |
---
license: other
language:
- en
- zh
base_model: internlm/internlm2-7b
---
[internlm2-20b](https://huggingface.co/internlm/internlm2-20b) converted into Llama-format weights. As with the 7B, not 100% sure if it's a correct conversion yet - play with it at your own risk.
Subject to internlm's license.
|
aserrasastre/Mistral-7B-def
|
aserrasastre
| 2024-01-18T17:05:06Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T17:00:35Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
bartowski/internlm2-chat-7b-sft-llama
|
bartowski
| 2024-01-18T16:47:52Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T16:45:00Z |
---
pipeline_tag: text-generation
license: other
---
# InternLM
<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div> </div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div> </div>
</div>
[OpenCompass Evaluation](https://github.com/internLM/OpenCompass/)
[💻Github Repo](https://github.com/InternLM/InternLM)
</div>
## Converted using <a href="https://huggingface.co/chargoddard">Charles Goddard's</a> conversion script to create llama models from internlm
Original REPO link: https://huggingface.co/internlm/internlm2-chat-7b-sft
ExLlamaV2 quants: https://huggingface.co/bartowski/internlm2-chat-7b-sft-llama-exl2
|
daochf/Lora-Meta-Llama2-7b-chat-hf-QandA_2g_v01-r2-v04
|
daochf
| 2024-01-18T16:35:48Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-18T16:35:26Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
MuhammedSaeed/LLMJS
|
MuhammedSaeed
| 2024-01-18T16:28:13Z | 1 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Salesforce/codegen2-1B_P",
"base_model:adapter:Salesforce/codegen2-1B_P",
"region:us"
] | null | 2024-01-18T16:26:33Z |
---
library_name: peft
base_model: Salesforce/codegen2-1B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
bartowski/internlm2-20b-llama-exl2
|
bartowski
| 2024-01-18T16:26:39Z | 3 | 0 | null |
[
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2024-01-18T13:10:23Z |
---
pipeline_tag: text-generation
license: other
quantized_by: bartowski
---
#### Special thanks to <a href="https://huggingface.co/chargoddard">Charles Goddard</a> for the conversion script to create llama models from internlm
## Exllama v2 Quantizations of internlm2-20b-llama
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Conversion was done using the default calibration dataset.
Default arguments used except when the bits per weight is above 6.0, at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/internlm/internlm2-20b
<a href="https://huggingface.co/bartowski/internlm2-20b-llama-exl2/tree/6_5">6.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/internlm2-20b-llama-exl2/tree/4_5">4.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/internlm2-20b-llama-exl2/tree/3_5">3.5 bits per weight</a>
<a href="https://huggingface.co/bartowski/internlm2-20b-llama-exl2/tree/3_0">3.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/internlm2-20b-llama-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about the measurement.json) to a folder called `internlm2-20b-llama-exl2`:
```shell
mkdir internlm2-20b-llama-exl2
huggingface-cli download bartowski/internlm2-20b-llama-exl2 --local-dir internlm2-20b-llama-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir internlm2-20b-llama-exl2
huggingface-cli download bartowski/internlm2-20b-llama-exl2 --revision 4_0 --local-dir internlm2-20b-llama-exl2 --local-dir-use-symlinks False
```
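If you prefer Python to the CLI, the same branch-per-quantization layout can be fetched with `huggingface_hub`. A minimal sketch (an assumption about usage, not part of the original instructions), using the 4.5 bpw branch listed above:
```python
# Minimal sketch: download one quantization branch via huggingface_hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="bartowski/internlm2-20b-llama-exl2",
    revision="4_5",  # branch name = bits per weight (see the links above)
    local_dir="internlm2-20b-llama-exl2-4_5",
)
print(local_path)
```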
|
Yamila/DialoGPT-small-Jonesy2-Bot
|
Yamila
| 2024-01-18T16:23:28Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T21:27:09Z |
---
language:
- en
tags:
- conversational
---
|
Nerdofdot/Nerdofdot_distilbert-base-uncased_TM_FTM
|
Nerdofdot
| 2024-01-18T16:14:28Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-18T13:54:30Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7975 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 0.4}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2392,
"weight_decay": 0.01
}
```
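Putting the pieces above together, a minimal training sketch could look like the following (illustrative only: the base checkpoint and the triplet examples are assumptions, and the real dataset behind the 7975-batch DataLoader is not included here):
```python
# Illustrative sketch of the listed training setup (not the original training script).
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("distilbert-base-uncased")  # assumed base checkpoint

# Placeholder triplets; the actual training data is not part of this card.
train_examples = [InputExample(texts=["anchor text", "positive text", "negative text"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=10)

train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=0.4,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=2392,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```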
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
LoneStriker/internlm2-7b-llama-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-18T16:11:08Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"base_model:internlm/internlm2-7b",
"base_model:finetune:internlm/internlm2-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T16:07:46Z |
---
license: other
language:
- en
- zh
base_model: internlm/internlm2-7b
---
[internlm2-7b](https://huggingface.co/internlm/internlm2-7b) converted into Llama-format weights. Generates coherent text but not sure it's 100% correct yet - still testing to make sure it hasn't lost any smarts.
|
itsdhanoob/taxi_task
|
itsdhanoob
| 2024-01-18T16:10:32Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-18T15:57:54Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_task
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="itsdhanoob/taxi_task", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
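Beyond loading the file, acting with a tabular Q-learning agent just means taking the arg-max action from the Q-table at each state. A self-contained sketch is below; it assumes the pickle stores a dictionary with the env id and a `"qtable"` array, as in the Hugging Face Deep RL course, and uses a Gymnasium-style environment API:
```python
# Sketch: greedy rollout of the saved Q-table (assumes the Deep RL course pickle layout).
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="itsdhanoob/taxi_task", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, _ = env.reset(seed=42)
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```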
|
abragin/dqn-SpaceInvadersNoFrameskip-v4
|
abragin
| 2024-01-18T16:06:16Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-18T16:05:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 593.50 +/- 134.43
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abragin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga abragin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga abragin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
LoneStriker/internlm2-7b-llama-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-18T16:05:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"base_model:internlm/internlm2-7b",
"base_model:finetune:internlm/internlm2-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T16:02:26Z |
---
license: other
language:
- en
- zh
base_model: internlm/internlm2-7b
---
[internlm2-7b](https://huggingface.co/internlm/internlm2-7b) converted into Llama-format weights. Generates coherent text but not sure it's 100% correct yet - still testing to make sure it hasn't lost any smarts.
|
smutuvi/whisper-small-sw-ndizi-158_2
|
smutuvi
| 2024-01-18T16:02:13Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:pplantinga/whisper-small-sw",
"base_model:adapter:pplantinga/whisper-small-sw",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T13:43:17Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: pplantinga/whisper-small-sw
model-index:
- name: whisper-small-sw-ndizi-158_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-sw-ndizi-158_2
This model is a fine-tuned version of [pplantinga/whisper-small-sw](https://huggingface.co/pplantinga/whisper-small-sw) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 100
- mixed_precision_training: Native AMP
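Purely as an illustration, the values above map onto the 🤗 `TrainingArguments` roughly as follows (a sketch under the assumption that the standard Trainer API was used; the actual training script is not part of this card):
```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments (not the original script).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="whisper-small-sw-ndizi-158_2",  # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=100,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```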
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 18 | 2.8008 |
| 2.5386 | 2.0 | 36 | 2.7557 |
| 2.4516 | 3.0 | 54 | 2.6801 |
| 2.4516 | 4.0 | 72 | 2.5842 |
| 2.363 | 5.0 | 90 | 2.5062 |
| 2.2145 | 6.0 | 108 | 2.4334 |
| 2.1358 | 7.0 | 126 | 2.3611 |
| 2.1358 | 8.0 | 144 | 2.2867 |
| 2.059 | 9.0 | 162 | 2.2158 |
| 1.8708 | 10.0 | 180 | 2.1452 |
| 1.8708 | 11.0 | 198 | 2.0739 |
| 1.8568 | 12.0 | 216 | 2.0307 |
| 1.8349 | 13.0 | 234 | 2.0092 |
| 1.7117 | 14.0 | 252 | 1.9914 |
| 1.7117 | 15.0 | 270 | 1.9788 |
| 1.6683 | 16.0 | 288 | 1.9669 |
| 1.6639 | 17.0 | 306 | 1.9548 |
| 1.6639 | 18.0 | 324 | 1.9432 |
| 1.6765 | 19.0 | 342 | 1.9338 |
| 1.6148 | 20.0 | 360 | 1.9265 |
| 1.5875 | 21.0 | 378 | 1.9149 |
| 1.5875 | 22.0 | 396 | 1.9058 |
| 1.5664 | 23.0 | 414 | 1.9013 |
| 1.6485 | 24.0 | 432 | 1.8953 |
| 1.535 | 25.0 | 450 | 1.8917 |
| 1.535 | 26.0 | 468 | 1.8849 |
| 1.5579 | 27.0 | 486 | 1.8783 |
| 1.5141 | 28.0 | 504 | 1.8746 |
| 1.5141 | 29.0 | 522 | 1.8707 |
| 1.5943 | 30.0 | 540 | 1.8663 |
| 1.4296 | 31.0 | 558 | 1.8630 |
| 1.4895 | 32.0 | 576 | 1.8595 |
| 1.4895 | 33.0 | 594 | 1.8536 |
| 1.5366 | 34.0 | 612 | 1.8525 |
| 1.4573 | 35.0 | 630 | 1.8488 |
| 1.4573 | 36.0 | 648 | 1.8489 |
| 1.4729 | 37.0 | 666 | 1.8441 |
| 1.4758 | 38.0 | 684 | 1.8384 |
| 1.4386 | 39.0 | 702 | 1.8391 |
| 1.4386 | 40.0 | 720 | 1.8356 |
| 1.3773 | 41.0 | 738 | 1.8343 |
| 1.4994 | 42.0 | 756 | 1.8328 |
| 1.4994 | 43.0 | 774 | 1.8331 |
| 1.4342 | 44.0 | 792 | 1.8287 |
| 1.4047 | 45.0 | 810 | 1.8283 |
| 1.3758 | 46.0 | 828 | 1.8261 |
| 1.3758 | 47.0 | 846 | 1.8234 |
| 1.3856 | 48.0 | 864 | 1.8201 |
| 1.3815 | 49.0 | 882 | 1.8210 |
| 1.4364 | 50.0 | 900 | 1.8197 |
| 1.4364 | 51.0 | 918 | 1.8188 |
| 1.4035 | 52.0 | 936 | 1.8183 |
| 1.3368 | 53.0 | 954 | 1.8176 |
| 1.3368 | 54.0 | 972 | 1.8155 |
| 1.424 | 55.0 | 990 | 1.8158 |
| 1.3782 | 56.0 | 1008 | 1.8159 |
| 1.3057 | 57.0 | 1026 | 1.8122 |
| 1.3057 | 58.0 | 1044 | 1.8136 |
| 1.3615 | 59.0 | 1062 | 1.8142 |
| 1.4013 | 60.0 | 1080 | 1.8091 |
| 1.4013 | 61.0 | 1098 | 1.8099 |
| 1.2894 | 62.0 | 1116 | 1.8102 |
| 1.3972 | 63.0 | 1134 | 1.8089 |
| 1.3564 | 64.0 | 1152 | 1.8096 |
| 1.3564 | 65.0 | 1170 | 1.8075 |
| 1.2808 | 66.0 | 1188 | 1.8078 |
| 1.3871 | 67.0 | 1206 | 1.8061 |
| 1.3871 | 68.0 | 1224 | 1.8060 |
| 1.267 | 69.0 | 1242 | 1.8070 |
| 1.2978 | 70.0 | 1260 | 1.8046 |
| 1.3657 | 71.0 | 1278 | 1.8062 |
| 1.3657 | 72.0 | 1296 | 1.8061 |
| 1.342 | 73.0 | 1314 | 1.8066 |
| 1.2504 | 74.0 | 1332 | 1.8063 |
| 1.3003 | 75.0 | 1350 | 1.8031 |
| 1.3003 | 76.0 | 1368 | 1.8053 |
| 1.2927 | 77.0 | 1386 | 1.8057 |
| 1.2653 | 78.0 | 1404 | 1.8032 |
| 1.2653 | 79.0 | 1422 | 1.8031 |
| 1.3574 | 80.0 | 1440 | 1.8036 |
| 1.2253 | 81.0 | 1458 | 1.8061 |
| 1.3348 | 82.0 | 1476 | 1.8036 |
| 1.3348 | 83.0 | 1494 | 1.8034 |
| 1.2846 | 84.0 | 1512 | 1.8033 |
| 1.2671 | 85.0 | 1530 | 1.8032 |
| 1.2671 | 86.0 | 1548 | 1.8038 |
| 1.3102 | 87.0 | 1566 | 1.8031 |
| 1.2603 | 88.0 | 1584 | 1.8011 |
| 1.286 | 89.0 | 1602 | 1.8029 |
| 1.286 | 90.0 | 1620 | 1.8026 |
| 1.2761 | 91.0 | 1638 | 1.8029 |
| 1.2416 | 92.0 | 1656 | 1.8014 |
| 1.2416 | 93.0 | 1674 | 1.8035 |
| 1.2798 | 94.0 | 1692 | 1.8008 |
| 1.3043 | 95.0 | 1710 | 1.8009 |
| 1.2969 | 96.0 | 1728 | 1.8004 |
| 1.2969 | 97.0 | 1746 | 1.8014 |
| 1.3087 | 98.0 | 1764 | 1.7992 |
| 1.2364 | 99.0 | 1782 | 1.7997 |
| 1.2748 | 100.0 | 1800 | 1.7989 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/internlm2-7b-llama-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-18T15:59:19Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"base_model:internlm/internlm2-7b",
"base_model:finetune:internlm/internlm2-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T15:57:01Z |
---
license: other
language:
- en
- zh
base_model: internlm/internlm2-7b
---
[internlm2-7b](https://huggingface.co/internlm/internlm2-7b) converted into Llama-format weights. Generates coherent text but not sure it's 100% correct yet - still testing to make sure it hasn't lost any smarts.
|
aserrasastre/Mistral-7B-Instruct-v8
|
aserrasastre
| 2024-01-18T15:56:00Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T15:51:37Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
LoneStriker/internlm2-7b-llama-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-18T15:53:39Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"base_model:internlm/internlm2-7b",
"base_model:finetune:internlm/internlm2-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T15:51:37Z |
---
license: other
language:
- en
- zh
base_model: internlm/internlm2-7b
---
[internlm2-7b](https://huggingface.co/internlm/internlm2-7b) converted into Llama-format weights. Generates coherent text but not sure it's 100% correct yet - still testing to make sure it hasn't lost any smarts.
|
yomilimi/gamblingspam-bert
|
yomilimi
| 2024-01-18T15:38:40Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-18T15:37:44Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gamblingspam
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gamblingspam
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2234
- Accuracy: 0.935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 0.2367 | 0.935 |
| No log | 2.0 | 100 | 0.2234 | 0.935 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
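For inference, a minimal sketch (an assumed usage pattern, not part of the original card) is to load the fine-tuned classifier through the standard `pipeline` API:
```python
# Sketch: classify a message with the fine-tuned checkpoint from this repo.
from transformers import pipeline

classifier = pipeline("text-classification", model="yomilimi/gamblingspam-bert")
print(classifier("Win big tonight! Claim your free casino bonus now."))  # example input
```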
|
Simple-Learner/phi-2-finetuned-gsm8k
|
Simple-Learner
| 2024-01-18T15:34:02Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"phi",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-01-18T02:00:16Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-finetuned-gsm8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-finetuned-gsm8k
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
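Since this repository holds PEFT adapter weights rather than a full model, a minimal loading sketch (an assumption about usage, not part of the original card) attaches the adapter to the `microsoft/phi-2` base:
```python
# Sketch: load the base model and attach the LoRA adapter from this repo.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "Simple-Learner/phi-2-finetuned-gsm8k")

prompt = "Question: If a pen costs 3 dollars, how much do 5 pens cost? Answer:"  # example prompt
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```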
|
macdog101/poca-SoccerTwos-v6
|
macdog101
| 2024-01-18T15:24:17Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-01-18T15:23:29Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: macdog101/poca-SoccerTwos-v6
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
TheNewPing/mega-base-fp16-increased-bs
|
TheNewPing
| 2024-01-18T15:23:37Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mega",
"multiple-choice",
"generated_from_trainer",
"base_model:mnaylor/mega-base-wikitext",
"base_model:finetune:mnaylor/mega-base-wikitext",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-01-18T15:22:18Z |
---
license: apache-2.0
base_model: mnaylor/mega-base-wikitext
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: mega-base-fp16-increased-bs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mega-base-fp16-increased-bs
This model is a fine-tuned version of [mnaylor/mega-base-wikitext](https://huggingface.co/mnaylor/mega-base-wikitext) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Accuracy: 0.4957
- Precision: 0.4961
- Recall: 0.5469
- F1: 0.5203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1024
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 34 | 0.6932 | 0.4990 | 0.4991 | 0.5469 | 0.5219 |
| No log | 2.0 | 68 | 0.6932 | 0.4959 | 0.4962 | 0.5455 | 0.5197 |
| No log | 3.0 | 102 | 0.6932 | 0.4957 | 0.4961 | 0.5469 | 0.5203 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
malomn/lebenchmark-phoneme-classification
|
malomn
| 2024-01-18T15:19:37Z | 2 | 0 |
transformers
|
[
"transformers",
"wav2vec2",
"feature-extraction",
"fr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-01-17T16:20:49Z |
---
license: apache-2.0
language:
- fr
---
|
TheNewPing/roberta-base
|
TheNewPing
| 2024-01-18T15:12:45Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-01-18T15:02:34Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3752
- Accuracy: 0.8436
- Precision: 0.8472
- Recall: 0.8383
- F1: 0.8427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5737 | 1.0 | 1074 | 0.4409 | 0.8018 | 0.8208 | 0.7723 | 0.7958 |
| 0.3689 | 2.0 | 2148 | 0.3821 | 0.8304 | 0.8398 | 0.8165 | 0.8280 |
| 0.3038 | 3.0 | 3222 | 0.3752 | 0.8436 | 0.8472 | 0.8383 | 0.8427 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CiaraRowles/temporal-controlnet-lineart-svd-v1
|
CiaraRowles
| 2024-01-18T15:09:03Z | 0 | 26 |
diffusers
|
[
"diffusers",
"safetensors",
"license:unknown",
"region:us"
] | null | 2024-01-18T14:53:20Z |
---
license: unknown
---
# Stable Video Diffusion Temporal Controlnet
## Overview
Introducing the Stable Video Diffusion Temporal Controlnet! This tool uses a controlnet style encoder with the svd base. It's designed to enhance your video diffusion projects by providing precise temporal control.
## Setup
- **Controlnet Model:** download the inference repo from here: https://github.com/CiaraStrawberry/sdv_controlnet
- **Installation:** run `pip install -r requirements.txt`
- **Execution:** Run "run_inference.py".
## Demo
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/63357214eb6132ca653020e7/RkjfJ8IKuZA-tYa-XS99y.mp4"></video>
## Notes
- **Focus on Central Object:** The system tends to extract motion features primarily from a central object and, occasionally, from the background. It's best to avoid overly complex motion or obscure objects.
- **Simplicity in Motion:** Stick to motions that svd can handle well without the controlnet. This ensures it will be able to apply the motion.
## Acknowledgements
- **Diffusers Team:** For the svd implementation.
- **Pixeli99:** For providing a practical svd training script: [SVD_Xtend](https://github.com/pixeli99/SVD_Xtend)
|
elliotthwang/KimLam_phi-2-zh.v1
|
elliotthwang
| 2024-01-18T15:06:41Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T15:03:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seatond/multi_yes_short
|
seatond
| 2024-01-18T14:55:25Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"region:us"
] | null | 2024-01-18T14:28:22Z |
---
library_name: peft
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0
|
VAST-AI/TriplaneGaussian
|
VAST-AI
| 2024-01-18T14:53:44Z | 0 | 81 | null |
[
"image-to-3d",
"arxiv:2312.09147",
"license:apache-2.0",
"region:us"
] |
image-to-3d
| 2024-01-17T03:44:21Z |
---
license: apache-2.0
pipeline_tag: image-to-3d
---
# TriplaneGaussian Model Card
<div align="center">
[**Project Page**](https://zouzx.github.io/TriplaneGaussian/) **|** [**Paper (ArXiv)**](https://arxiv.org/abs/2312.09147) **|** [**Code**](https://github.com/VAST-AI-Research/TriplaneGaussian) **|** [**Gradio demo**](https://huggingface.co/spaces/VAST-AI/TriplaneGaussian)
</div>
## Introduction
TGS enables fast reconstruction from a single-view image in a few seconds, based on a hybrid Triplane-Gaussian 3D representation.
## Examples
### Results on Images Generated by [Midjourney](https://www.midjourney.com/)
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/644dbf6453ad80c6593bf748/BcJp8alZRXAIdPmfbVGdx.qt"></video>
### Results on Captured Real-world Images
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/644dbf6453ad80c6593bf748/bgAxqUQpnisQAmsGZ9Q_0.qt"></video>
## Model Details
The model `model_lvis_rel.ckpt` is trained on the Objaverse-LVIS dataset, which includes only ~45K synthetic objects.
## Usage
You can download the model directly from this repository or load it in a Python script with:
```python
from huggingface_hub import hf_hub_download
MODEL_CKPT_PATH = hf_hub_download(repo_id="VAST-AI/TriplaneGaussian", filename="model_lvis_rel.ckpt", repo_type="model")
```
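To sanity-check the download before running the full pipeline, the checkpoint can be opened as an ordinary PyTorch file — a minimal sketch continuing from the snippet above, assuming a standard PyTorch/Lightning `.ckpt`:

```python
import torch

# Inspect the downloaded checkpoint on CPU (assumes a standard PyTorch/Lightning .ckpt file)
ckpt = torch.load(MODEL_CKPT_PATH, map_location="cpu")
print(list(ckpt.keys()))
```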
More details can be found in our [Github repository](https://github.com/VAST-AI-Research/TriplaneGaussian).
## Citation
If you find this work helpful, please consider citing our paper:
```bibtex
@article{zou2023triplane,
title={Triplane Meets Gaussian Splatting: Fast and Generalizable Single-View 3D Reconstruction with Transformers},
author={Zou, Zi-Xin and Yu, Zhipeng and Guo, Yuan-Chen and Li, Yangguang and Liang, Ding and Cao, Yan-Pei and Zhang, Song-Hai},
journal={arXiv preprint arXiv:2312.09147},
year={2023}
}
```
|
StefKoople/test_rl
|
StefKoople
| 2024-01-18T14:44:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-18T14:44:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.60 +/- 17.57
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — replace it with the file actually stored in this repo):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed, not confirmed by the repo contents
checkpoint = load_from_hub(repo_id="StefKoople/test_rl", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
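As a hedged follow-up (not part of the original card), the loaded agent (`model` above) can be evaluated with Stable-Baselines3's built-in helper; the environment id assumes a Gymnasium install with Box2D:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Evaluate the loaded PPO agent over a few episodes (requires gymnasium[box2d])
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```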
|
LarryAIDraw/LoRA__mhy_genshin_Shenhe_v1_0
|
LarryAIDraw
| 2024-01-18T14:34:11Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-18T14:27:31Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/268208/shenheoror-genshin-lora
|
LarryAIDraw/aoandon-nvwls-v1
|
LarryAIDraw
| 2024-01-18T14:33:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-18T14:24:49Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/268684/aoandon-onmyoji-lora
|