modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-11 06:30:11) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 555 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-11 06:29:58) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
EmbeddedLLM/bge-base-en-v1.5-onnx-o4-o2-gpu
|
EmbeddedLLM
| 2024-02-19T06:44:49Z | 17 | 0 |
transformers
|
[
"transformers",
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-02-16T02:53:48Z |
---
pipeline_tag: feature-extraction
tags:
- feature-extraction
- sentence-similarity
language: en
license: mit
---
# ONNX Conversion of [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- ONNX model for GPU with O4-O2 optimisation
- We exported the model with `use_raw_attention_mask=True` [due to this issue](https://github.com/microsoft/onnxruntime/issues/18945)
## Usage
```python
import torch.nn.functional as F
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
sentences = [
"The llama (/ˈlɑːmə/) (Lama glama) is a domesticated South American camelid.",
"The alpaca (Lama pacos) is a species of South American camelid mammal.",
"The vicuña (Lama vicugna) (/vɪˈkuːnjə/) is one of the two wild South American camelids.",
]
model_name = "EmbeddedLLM/bge-base-en-v1.5-onnx-o4-o2-gpu"
device = "cuda"
provider = "CUDAExecutionProvider"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ORTModelForFeatureExtraction.from_pretrained(
model_name, use_io_binding=True, provider=provider, device_map=device
)
inputs = tokenizer(
sentences,
padding=True,
truncation=True,
return_tensors="pt",
max_length=model.config.max_position_embeddings,
)
inputs = inputs.to(device)
embeddings = model(**inputs).last_hidden_state[:, 0]
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.cpu().numpy().shape)
```
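Since the embeddings are already L2-normalised, pairwise cosine similarities reduce to a plain matrix product. The following is a small follow-up sketch, assuming the `embeddings` tensor produced by the example above:
```python
# Cosine similarity between the (already normalised) sentence embeddings
scores = embeddings @ embeddings.T  # shape: (3, 3)
print(scores.cpu().numpy())
```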
|
Doowon96/roberta-base-finetuned-hate_speech
|
Doowon96
| 2024-02-19T06:44:34Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:klue/roberta-base",
"base_model:finetune:klue/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-19T01:57:12Z |
---
base_model: klue/roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-finetuned-hate_speech
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-hate_speech
This model is a fine-tuned version of [klue/roberta-base](https://huggingface.co/klue/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9118
- F1: 0.5217
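A minimal inference sketch, assuming the checkpoint works with the standard 🤗 `text-classification` pipeline and the tokenizer bundled in this repo (the Korean input sentence is only an illustrative placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Doowon96/roberta-base-finetuned-hate_speech")
# Labels come from the fine-tuned KLUE RoBERTa classification head
print(classifier("이 영화 정말 최악이다"))
```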
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.561667708933033e-06
- train_batch_size: 64
- eval_batch_size: 128
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 87 | 1.0658 | 0.2015 |
| No log | 2.0 | 174 | 1.0056 | 0.3060 |
| No log | 3.0 | 261 | 0.9283 | 0.5110 |
| No log | 4.0 | 348 | 0.9118 | 0.5217 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
km2k/elephants
|
km2k
| 2024-02-19T06:44:18Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-19T06:40:12Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Elephants Dreambooth model trained by km2k following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AEC-730221205015
Sample pictures of this concept:
|
jeiku/Lunar_10.7B_GGUF
|
jeiku
| 2024-02-19T06:43:24Z | 13 | 0 | null |
[
"gguf",
"en",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T05:18:47Z |
---
license: cc-by-nc-sa-4.0
language:
- en
---
This model is a SLERP merge of one of my own finetunes with https://huggingface.co/Sao10K/Sensualize-Solar-10.7B (created by https://huggingface.co/Sao10K).
Lunar was produced by a variety of methods for the purpose of being a companion bot capable of intimacy as well as conversation.
FP16 here: https://huggingface.co/jeiku/Lunar_10.7B
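A minimal sketch for running the GGUF weights locally with `llama-cpp-python` (the filename and prompt below are placeholders; use whichever quantisation you downloaded from this repo):
```python
from llama_cpp import Llama

# Load a downloaded GGUF quantisation of Lunar 10.7B (path/filename are placeholders)
llm = Llama(model_path="./Lunar_10.7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("You are Lunar, a friendly companion.\nUser: Hi there!\nLunar:", max_tokens=128)
print(out["choices"][0]["text"])
```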
|
ggomma/aika-dreambooth-1e-6-400-8f90a45d-49dd-4a8e-9327-69a41688b3ef
|
ggomma
| 2024-02-19T06:38:50Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:KantoRegion/99mix-converted",
"base_model:finetune:KantoRegion/99mix-converted",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-19T06:33:33Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: ggomma/test
instance_prompt: '"An image of Aika person"'
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - ggomma/aika-dreambooth-1e-6-400-8f90a45d-49dd-4a8e-9327-69a41688b3ef
This is a DreamBooth model derived from ggomma/test. The weights were trained on "An image of Aika person" using [DreamBooth](https://dreambooth.github.io/).
Some example images can be found below.
DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
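The snippet above is left as a TODO in the card; a minimal sketch, assuming the repo loads with the standard `StableDiffusionPipeline` and using the instance prompt from the metadata above, might look like this:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ggomma/aika-dreambooth-1e-6-400-8f90a45d-49dd-4a8e-9327-69a41688b3ef",
    torch_dtype=torch.float16,
).to("cuda")

# Instance prompt taken from the model metadata
image = pipe("An image of Aika person", num_inference_steps=30).images[0]
image.save("aika.png")
```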
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
sunlight2002/distilbert-base-uncased-finetuned-emotion
|
sunlight2002
| 2024-02-19T06:37:26Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-19T01:12:49Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9215393761396141
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- Accuracy: 0.922
- F1: 0.9215
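A short inference sketch, assuming the label mapping stored in the checkpoint's config (the input sentence is only an example):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sunlight2002/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I am over the moon about the results!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(model.config.id2label[int(probs.argmax())])
```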
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3208 | 0.9045 | 0.9033 |
| No log | 2.0 | 500 | 0.2183 | 0.922 | 0.9215 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Tongjilibo/simbert-chinese-base
|
Tongjilibo
| 2024-02-19T06:29:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T06:04:26Z |
---
license: apache-2.0
---
## Notes
- `config.json` is used by transformers
- `bert4torch_config.json` is used by bert4torch
## Weight conversion
- These weights were converted from the original TensorFlow checkpoint; you can use them directly, or download the original TF weights from the source project below and convert them with `convert.py`
- Source project: https://github.com/ZhuiyiTechnology/simbert
- Conversion script: `convert.py`
|
Tongjilibo/simbert-chinese-tiny
|
Tongjilibo
| 2024-02-19T06:29:19Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T06:13:25Z |
---
license: apache-2.0
---
## Notes
- `config.json` is used by transformers
- `bert4torch_config.json` is used by bert4torch
## Weight conversion
- These weights were converted from the original TensorFlow checkpoint; you can use them directly, or download the original TF weights from the source project below and convert them with `convert.py`
- Source project: https://github.com/ZhuiyiTechnology/simbert
- Conversion script: `convert.py`
|
JKuang96/rl_course_vizdoom_health_gathering_supreme
|
JKuang96
| 2024-02-19T06:26:07Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-19T06:08:51Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.89 +/- 5.13
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r JKuang96/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# Note: the auto-generated card referenced the Colab kernel launcher here; a typical
# Sample-Factory enjoy invocation (assuming the stock VizDoom example scripts) is:
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# Note: as above, assuming the stock Sample-Factory VizDoom training script:
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
giprime/OOM-SOLAR-10.7B_01
|
giprime
| 2024-02-19T06:14:41Z | 55 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T23:09:43Z |
---
license: apache-2.0
language:
- en
- ko
library_name: transformers
---
## Model Architecture
OOM-SOLAR-10.7B_01 is a language model that uses an optimized transformer architecture based on upstage/SOLAR-10.7B-v1.0.
## Model description
Based on "beomi/OPEN-SOLAR-KO-10.7B"
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 24
- gradient_accumulation_steps: 1
- total_train_batch_size:
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
greatakela/mistral_instruct_classifyGR_full
|
greatakela
| 2024-02-19T06:14:40Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T06:04:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QingyunWang/distilbert-base-uncased-finetuned-emotion
|
QingyunWang
| 2024-02-19T06:13:36Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-17T23:56:21Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9225782437110167
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2174
- Accuracy: 0.923
- F1: 0.9226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.818 | 1.0 | 250 | 0.3215 | 0.901 | 0.8999 |
| 0.2514 | 2.0 | 500 | 0.2174 | 0.923 | 0.9226 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
greatakela/mistral_instruct_classifyGR
|
greatakela
| 2024-02-19T06:02:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-19T06:02:11Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral_instruct_classifyGR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_instruct_classifyGR
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3646
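Since this repo contains a PEFT (LoRA) adapter rather than full weights, a minimal loading sketch assuming the standard `peft` API on top of the base Mistral-7B-Instruct checkpoint:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")

# Attach the fine-tuned adapter from this repo on top of the base model
model = PeftModel.from_pretrained(model, "greatakela/mistral_instruct_classifyGR")
```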
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4154 | 1.0 | 311 | 1.3697 |
| 1.2982 | 2.0 | 622 | 1.3345 |
| 1.2056 | 3.0 | 933 | 1.3285 |
| 1.1679 | 4.0 | 1244 | 1.3431 |
| 1.0683 | 5.0 | 1555 | 1.3646 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Kathermoitheen/my-pet-cat
|
Kathermoitheen
| 2024-02-19T05:59:56Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-19T05:55:59Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat Dreambooth model trained by Kathermoitheen following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AEC-730221205016
Sample pictures of this concept:
|
sugafree/whisper-medium-hu
|
sugafree
| 2024-02-19T05:54:42Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hu",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-16T06:57:12Z |
---
language:
- hu
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Medium HU
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: hu
split: test
args: hu
metrics:
- name: Wer
type: wer
value: 14.829034193161366
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium HU
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2699
- Wer Ortho: 17.1763
- Wer: 14.8290
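A minimal transcription sketch, assuming the standard 🤗 `automatic-speech-recognition` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="sugafree/whisper-medium-hu")
# chunk_length_s lets the pipeline handle audio longer than 30 seconds
print(asr("sample_hu.wav", chunk_length_s=30)["text"])
```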
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 20000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:-------:|
| 0.0804 | 1.38 | 2000 | 0.1977 | 19.2869 | 16.6612 |
| 0.038 | 2.76 | 4000 | 0.2028 | 18.2211 | 15.7494 |
| 0.014 | 4.14 | 6000 | 0.2190 | 17.9961 | 15.3466 |
| 0.0107 | 5.51 | 8000 | 0.2328 | 17.3490 | 14.9370 |
| 0.0144 | 6.89 | 10000 | 0.2376 | 17.4153 | 14.9559 |
| 0.0049 | 8.27 | 12000 | 0.2424 | 16.9984 | 14.6953 |
| 0.0071 | 9.65 | 14000 | 0.2594 | 17.6961 | 15.3586 |
| 0.0037 | 11.03 | 16000 | 0.2546 | 17.2007 | 14.8667 |
| 0.0078 | 12.41 | 18000 | 0.2644 | 17.5757 | 15.1495 |
| 0.0043 | 13.78 | 20000 | 0.2699 | 17.1763 | 14.8290 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
pei1111/NeuroSpectra
|
pei1111
| 2024-02-19T05:53:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-02-19T05:48:25Z |
---
license: apache-2.0
---
**Autonomous Driving Guide and Tour Guide:**
- Design and develop autonomous driving systems, including navigation and route planning features.
- Provide passengers with navigation and tourism information, including landmarks, restaurants, etc.
- Assist in handling emergencies when needed, such as providing emergency contacts or navigating to the nearest hospital.

**Emergency Event Reporter:**
- Monitor the vehicle's operational status and sensor data to detect potential emergency events in real time.
- Report emergency events to relevant authorities, providing detailed information about the event and location data.

**Traffic Regulations Expert:**
- Research, analyze, and understand traffic regulations and legal frameworks to keep autonomous driving systems compliant.
- Provide legal advice and guidance to ensure vehicle operations and activities comply with legal requirements.

**Researcher:**
- Conduct research on autonomous driving technology, traffic regulations, and related fields.
- Analyze industry trends and emerging technologies, providing recommendations and solutions.

**Technology Enforcement Segment:**
- Design and deploy technology-based traffic enforcement systems for monitoring traffic violations and law enforcement.
- Analyze traffic violation behaviors and accident situations, assisting law enforcement agencies in handling related cases.

In summary, each of these roles serves a different function in the field of autonomous driving, but all aim to ensure the safety, compliance, and efficiency of autonomous driving systems. They may require relevant expertise and skills such as autonomous driving technology, traffic regulations, and data analysis.
|
whitefox123/whisper-large-ar5
|
whitefox123
| 2024-02-19T05:44:10Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ar",
"dataset:whitefox123/tashkeel",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-18T10:15:29Z |
---
language:
- ar
license: apache-2.0
base_model: openai/whisper-large-v3
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- whitefox123/tashkeel
metrics:
- wer
model-index:
- name: Whisper large - tuned
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: CLARtts
type: whitefox123/tashkeel
config: default
split: None
args: 'config: ar, split: test'
metrics:
- name: Wer
type: wer
value: 156.86486486486487
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper large - tuned
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the CLARtts dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1992
- Wer: 156.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 9375
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0864 | 1.6 | 1000 | 0.1155 | 165.5135 |
| 0.0291 | 3.2 | 2000 | 0.1192 | 268.0360 |
| 0.0196 | 4.8 | 3000 | 0.1317 | 217.9820 |
| 0.0024 | 6.4 | 4000 | 0.1583 | 136.1802 |
| 0.0012 | 8.0 | 5000 | 0.1708 | 136.3604 |
| 0.0004 | 9.6 | 6000 | 0.1841 | 128.7207 |
| 0.0009 | 11.2 | 7000 | 0.1831 | 169.8739 |
| 0.0003 | 12.8 | 8000 | 0.1885 | 158.7387 |
| 0.0001 | 14.4 | 9000 | 0.1992 | 156.8649 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.2
|
duraad/nep-spell-mt5-small-01
|
duraad
| 2024-02-19T05:37:17Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:duraad/nep-spell-mt5-small-00",
"base_model:finetune:duraad/nep-spell-mt5-small-00",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-19T04:37:57Z |
---
license: apache-2.0
base_model: duraad/nep-spell-mt5-small-00
tags:
- generated_from_trainer
model-index:
- name: nep-spell-mt5-small-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nep-spell-mt5-small-01
This model is a fine-tuned version of [duraad/nep-spell-mt5-small-00](https://huggingface.co/duraad/nep-spell-mt5-small-00) on an unknown dataset.
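No usage snippet is provided; below is a minimal seq2seq sketch. The Nepali input sentence is only an illustrative placeholder, and the exact input format expected by the checkpoint is not documented here:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "duraad/nep-spell-mt5-small-01"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder Nepali sentence to be spell-checked
inputs = tokenizer("म विद्यालय जान्छु", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```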
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
CodeChris/AnimagineXL-v3-openvino
|
CodeChris
| 2024-02-19T05:35:44Z | 0 | 0 | null |
[
"text-to-image",
"stable-diffusion",
"safetensors",
"stable-diffusion-xl",
"animagine-xl",
"en",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:finetune:cagliostrolab/animagine-xl-3.0",
"region:us"
] |
text-to-image
| 2024-02-18T17:28:55Z |
---
language:
- en
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-xl
- animagine-xl
base_model: cagliostrolab/animagine-xl-3.0
---
# AnimagineXL-v3-openvino
This is an *unofficial* [OpenVINO](https://github.com/openvinotoolkit/openvino) variant of [cagliostrolab/animagine-xl-3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0).
This repo is provided for convenience when running the Animagine XL v3 model on Intel CPU/GPU, since loading and converting an SDXL model to OpenVINO can be quite slow (tens of minutes).
Table of contents:
- [Usage](#usage)
- [How the conversion was done](#how-the-conversion-was-done)
- [Appendix](#appendix)
## Usage
Take CPU for example:
```python
from optimum.intel.openvino import OVStableDiffusionXLPipeline
from diffusers import (
EulerAncestralDiscreteScheduler,
DPMSolverMultistepScheduler
)
model_id = "CodeChris/AnimagineXL-v3-openvino"
pipe = OVStableDiffusionXLPipeline.from_pretrained(model_id)
# Fix output image size & batch_size for faster speed
img_w, img_h = 832, 1216 # Example
pipe.reshape(width=img_w, height=img_h,
batch_size=1, num_images_per_prompt=1)
## Change scheduler
# Animagine XL recommends Euler A:
# pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
pipe.scheduler.config,
use_karras_sigmas=True,
algorithm_type="dpmsolver++"
) # I prefer DPM++ 2M Karras
# Turn off the filter
pipe.safety_checker = None
# If running on an Intel GPU (OpenVINO device 'GPU') rather than CPU, you need:
# pipe.to('gpu')
```
After the pipe is prepared, a txt2img task can be executed as below:
```python
prompt = "1girl, dress, day, masterpiece, best quality"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"
images = pipe(
    prompt,
    negative_prompt=negative_prompt,
    # If reshaped, image size must equal the reshaped size
    width=img_w, height=img_h,
    guidance_scale=7,
    num_inference_steps=20
).images
img = images[0]
img.save('sample.png')
```
For convenience, here are the recommended image sizes from the official Animagine XL doc:
```
# Or their transpose
896 x 1152
832 x 1216
768 x 1344
640 x 1536
1024 x 1024
```
## How the conversion was done
First, install optimum:
```powershell
pip install --upgrade-strategy eager optimum[openvino,nncf]
```
Then, the repo is converted using the following command:
```powershell
optimum-cli export openvino --model 'cagliostrolab/animagine-xl-3.0' 'models/openvino/AnimagineXL-v3' --task 'stable-diffusion-xl'
```
## Appendix
Push large files **without** git-committing the latest changes:
```
git lfs install
huggingface-cli lfs-enable-largefiles .
huggingface-cli upload --commit-message 'Upload model files' 'CodeChris/AnimagineXL-v3-openvino' .
```
Other notes:
* The conversion was done using `optimum==1.16.1` and `openvino==2023.2.0`.
* You may query `optimum-cli export openvino --help` for more usage details.
|
likhith231/results
|
likhith231
| 2024-02-19T05:27:43Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-02-19T05:27:22Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Llama-2-7b-chat-hf
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.1
|
jeiku/Lunar_10.7B
|
jeiku
| 2024-02-19T05:21:44Z | 53 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T04:46:25Z |
---
license: cc-by-nc-sa-4.0
language:
- en
---
This model is a SLERP merge of one of my own finetunes with https://huggingface.co/Sao10K/Sensualize-Solar-10.7B (created by https://huggingface.co/Sao10K).
Lunar was produced by a variety of methods for the purpose of being a companion bot capable of intimacy as well as conversation.
GGUF here: https://huggingface.co/jeiku/Lunar_10.7B_GGUF
|
markseo/ppo-Huggy
|
markseo
| 2024-02-19T05:19:19Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-02-19T05:19:06Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: markseo/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ismaeelk15/my-cat
|
ismaeelk15
| 2024-02-19T05:04:13Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-19T04:59:42Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Cat Dreambooth model trained by ismaeelk15 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AEC-730221205013
Sample pictures of this concept:
|
LegoClipStars/Disney_Descendants2_Uma
|
LegoClipStars
| 2024-02-19T04:59:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:cc-by-4.0",
"region:us"
] |
text-to-image
| 2024-02-19T04:58:46Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: NEFT
parameters:
negative_prompt: Daughter of Ursula
output:
url: images/descendants-disney-lol.jpeg
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: Please spare me
license: cc-by-4.0
---
# Disney_Descendants2_Uma
<Gallery />
## Model description
Here's my RVC voice model of Uma from Disney's "Descendants 2"
## Trigger words
You should use `Please spare me` to trigger the image generation.
## Download model
[Download](/LegoClipStars/Disney_Descendants2_Uma/tree/main) them in the Files & versions tab.
|
bnvsyjy/my-cool-model
|
bnvsyjy
| 2024-02-19T04:54:25Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:VAGOsolutions/SauerkrautLM-SOLAR-Instruct",
"base_model:merge:VAGOsolutions/SauerkrautLM-SOLAR-Instruct",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:merge:upstage/SOLAR-10.7B-Instruct-v1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T04:49:47Z |
---
base_model:
- VAGOsolutions/SauerkrautLM-SOLAR-Instruct
- upstage/SOLAR-10.7B-Instruct-v1.0
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [VAGOsolutions/SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct)
* [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: upstage/SOLAR-10.7B-Instruct-v1.0
layer_range: [0, 32]
- model: VAGOsolutions/SauerkrautLM-SOLAR-Instruct
layer_range: [0, 32]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
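Reproducing a merge like this only requires pointing mergekit's CLI at the YAML above; a typical invocation (config and output paths are placeholders) might be:
```
mergekit-yaml config.yaml ./merged-solar --cuda
```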
|
Viennes/marian-finetuned-kde4-en-to-fr
|
Viennes
| 2024-02-19T04:41:36Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-18T23:25:00Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.88398487672078
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8840
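A minimal usage sketch, assuming the standard 🤗 translation pipeline (the input sentence is only an example):
```python
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="Viennes/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads")[0]["translation_text"])
```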
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
lmg-anon/vntl-qwen-14b-v0.1-hf
|
lmg-anon
| 2024-02-19T04:29:44Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"translation",
"ja",
"en",
"dataset:lmg-anon/VNTL-v3.1-1k-q",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-02-19T03:32:08Z |
---
license: other
license_name: qwen
license_link: LICENSE
datasets:
- lmg-anon/VNTL-v3.1-1k-q
language:
- ja
- en
pipeline_tag: translation
---
This is the merge of the [experimental VNTL Qwen14B v0.1 qlora](https://huggingface.co/lmg-anon/vntl-qwen-14b-v0.1-qlora) created using the [VNTL-v3.1-1k-q](https://huggingface.co/datasets/lmg-anon/VNTL-v3.1-1k-q) dataset.
This is a prompt example:
```
<<METADATA>>
[character] Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
[character] Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<START>>
<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>>
[Sakuno]: 『... Sorry.』<|endoftext|>
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>>
```
The generated translation for that prompt, with temperature 0 and using `load_in_4bit`, is:
```
[Shingo]: 「It's okay, I know it sounds weird to say this, but I'm glad you got lost. You're so cute that I was worried about all sorts of things.」
```
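For reference, a sketch of how that generation might be reproduced with 🤗 transformers, assuming 4-bit loading via bitsandbytes and greedy decoding (temperature 0); the prompt placeholder stands for the example shown above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmg-anon/vntl-qwen-14b-v0.1-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto")

prompt = "..."  # the VNTL prompt shown above, ending with "<<ENGLISH>>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens (the translation line)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```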
|
ratno/llama-2-7b-chat-1k
|
ratno
| 2024-02-19T04:26:18Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T04:20:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
brandolorian/answer-Qwen-stioning
|
brandolorian
| 2024-02-19T04:23:12Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:finetune:Qwen/Qwen1.5-0.5B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T03:47:36Z |
---
license: other
base_model: Qwen/Qwen1.5-0.5B
tags:
- generated_from_trainer
model-index:
- name: answer-Qwen-stioning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# answer-Qwen-stioning
This model is a fine-tuned version of [Qwen/Qwen1.5-0.5B](https://huggingface.co/Qwen/Qwen1.5-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 2.6400
- eval_runtime: 68.7183
- eval_samples_per_second: 178.744
- eval_steps_per_second: 22.352
- epoch: 3.0
- step: 9213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
B2111797/trans-vi-en-v2
|
B2111797
| 2024-02-19T04:14:34Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-19T04:14:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
laishram/bloom-7b1-lora-tagger
|
laishram
| 2024-02-19T04:08:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T04:08:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lmg-anon/vntl-7b-v0.3.1-gguf
|
lmg-anon
| 2024-02-19T04:02:30Z | 9 | 0 | null |
[
"gguf",
"translation",
"ja",
"en",
"dataset:lmg-anon/VNTL-v2.5-1k",
"license:llama2",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-02-18T21:01:28Z |
---
license: llama2
datasets:
- lmg-anon/VNTL-v2.5-1k
language:
- ja
- en
pipeline_tag: translation
---
This repository contains GGUF quantizations of the merged [experimental VNTL v0.3.1 LoRA](https://huggingface.co/lmg-anon/vntl-7b-v0.3.1-lora).
This is a prompt example:
```
<<START>>
Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>> (fidelity = absolute)
[Sakuno]: 『... Sorry.』</s>
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>> (fidelity = high)
```
The generated translation for that prompt, with temperature 0, is:
```
[Shingo]: 「No, don't apologize. I'm just glad you're safe. You're so cute, Sakuno, I was worried sick.」
```
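A minimal sketch of running one of these quants with `llama-cpp-python`; the GGUF filename below is illustrative (pick one of the files actually present in this repository), and the sampling settings simply mirror the temperature-0 example above:
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Filename is illustrative; use one of the GGUF files from this repository.
llm = Llama(model_path="vntl-7b-v0.3.1-q6_k.gguf", n_ctx=2048)

prompt = (
    "<<START>>\n"
    "Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)\n"
    "Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female\n"
    "<<JAPANESE>>\n"
    "[桜乃]: 『……ごめん』\n"
    "<<ENGLISH>> (fidelity = absolute)\n"
    "[Sakuno]: 『... Sorry.』</s>\n"
    "<<JAPANESE>>\n"
    "[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」\n"
    "<<ENGLISH>> (fidelity = high)\n"
)

# Temperature 0 for deterministic output; stop at the end of the translated line.
out = llm(prompt, max_tokens=128, temperature=0.0, stop=["\n"])
print(out["choices"][0]["text"])
```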
|
FINNUMBER/Yi-Ko-6B-Finch-ALL-FULL-NEW-epoch3
|
FINNUMBER
| 2024-02-19T04:00:40Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T03:19:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nattipon/bert-finetuned-squad
|
Nattipon
| 2024-02-19T04:00:28Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-12T13:26:25Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 1.17.0
- Tokenizers 0.14.1
|
animeshjoshi/qa_model
|
animeshjoshi
| 2024-02-19T03:55:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-19T02:21:29Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6132
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 50 | 4.1830 |
| No log | 2.0 | 100 | 3.7025 |
| No log | 3.0 | 150 | 3.6132 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
FINNUMBER/Yi-Ko-6B-Finch-NQA-ARI-FULL-NEW-epoch3
|
FINNUMBER
| 2024-02-19T03:46:35Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T16:04:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fzzhang/pearl_gsm8k_quantized_s
|
fzzhang
| 2024-02-19T03:42:14Z | 4 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:louisbrulenaudet/Pearl-7B-slerp",
"base_model:adapter:louisbrulenaudet/Pearl-7B-slerp",
"license:apache-2.0",
"region:us"
] | null | 2024-02-18T22:24:42Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: louisbrulenaudet/Pearl-7B-slerp
model-index:
- name: pearl_gsm8k_quantized_s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pearl_gsm8k_quantized_s
This model is a fine-tuned version of [louisbrulenaudet/Pearl-7B-slerp](https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
|
JKuang96/ppo-LunarLander-v2
|
JKuang96
| 2024-02-19T03:38:16Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-19T03:15:53Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -30.78 +/- 17.30
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'gym_id': 'LunarLander-v2'
'learning_rate': 0.00025
'seed': 1
'total_timesteps': 500000
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'ppo-implementation-details'
'wandb_entity': None
'capture_video': False
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'JKuang96/ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
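Because this is a course-style custom PPO implementation, there is no single standard loading API. The sketch below only shows the generic steps of fetching a checkpoint from this repo and creating the environment; the checkpoint filename and the way the network is rebuilt are assumptions, not part of this card:
```python
# pip install "gymnasium[box2d]" huggingface_hub torch
import gymnasium as gym
import torch
from huggingface_hub import hf_hub_download

# The filename is an assumption; check the repo's file list for the actual checkpoint name.
ckpt_path = hf_hub_download(repo_id="JKuang96/ppo-LunarLander-v2", filename="model.pt")
state_dict = torch.load(ckpt_path, map_location="cpu")

env = gym.make("LunarLander-v2")
obs, info = env.reset(seed=1)
# Rebuild the actor-critic network used during training (see the hyperparameters above)
# and load `state_dict` into it before running an evaluation episode.
```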
|
dranger003/CodeLlama-70b-Instruct-iMat.GGUF
|
dranger003
| 2024-02-19T03:31:16Z | 17 | 2 |
gguf
|
[
"gguf",
"text-generation",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-02-18T20:30:38Z |
---
license: llama2
library_name: gguf
pipeline_tag: text-generation
---
GGUF importance matrix (imatrix) quants for https://huggingface.co/codellama/CodeLlama-70b-Instruct-hf
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using wiki.train.raw.
**NOTE**: The template for this model is very sensitive and must be set very precisely.
All whitespace is intended, and special tokens `<s>` and `<step>` must be encoded properly, i.e. `1` and `32015` respectively.
| Layers | Context | Template |
| --- | --- | --- |
| <pre>80</pre> | <pre>4096</pre> | <pre>\<s\>Source: system<br><br> {instructions} \<step\> Source: user<br><br> {prompt} \<step\> Source: assistant<br>Destination: user<br><br> {response}</pre> |
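For example, a single-turn prompt can be assembled as below (sketch only; the system and user messages are illustrative, and whatever runtime you use must encode `<s>` and `<step>` as token ids `1` and `32015` rather than as plain text):
```python
def build_codellama_70b_prompt(system: str, user: str) -> str:
    # Whitespace is significant; <s> and <step> are special tokens and must be
    # tokenized as ids 1 and 32015, not treated as literal text, by your runtime.
    return (
        f"<s>Source: system\n\n {system} <step> "
        f"Source: user\n\n {user} <step> "
        f"Source: assistant\nDestination: user\n\n "
    )

prompt = build_codellama_70b_prompt(
    "You are a helpful coding assistant.",       # illustrative system message
    "Write a function that reverses a string.",  # illustrative user message
)
```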
|
xiaoshi/pretrain_model_demo
|
xiaoshi
| 2024-02-19T03:30:30Z | 0 | 0 |
nemo
|
[
"nemo",
"biology",
"question-answering",
"ak",
"dataset:Open-Orca/OpenOrca",
"dataset:Salesforce/dialogstudio",
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
question-answering
| 2023-08-13T13:53:48Z |
---
license: bigscience-bloom-rail-1.0
datasets:
- Open-Orca/OpenOrca
- Salesforce/dialogstudio
language:
- ak
metrics:
- accuracy
- bleu
pipeline_tag: question-answering
tags:
- biology
library_name: nemo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
|
linhphanff/phobert-cse-general
|
linhphanff
| 2024-02-19T03:13:10Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T03:03:01Z |
This repository only stores intermediate results. For the long term, they should be stored in a separate model repository. Besides the binary model, you should also store model metadata such as the date and the size of the training data.
|
freshpearYoon/large-v3_3
|
freshpearYoon
| 2024-02-19T03:12:18Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-18T10:24:08Z |
---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
base_model: openai/whisper-large-v3
model-index:
- name: whisper_finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_finetune
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the AIHub Korean children's speech (한국어 아동 음성데이터) dataset.
It achieves the following results on the evaluation set:
- Cer: 6.2655
- Loss: 1.0532
- Wer: 23.9347
## Model description
More information needed
## Intended uses & limitations
More information needed
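As a minimal illustration, the checkpoint should load with the standard `transformers` ASR pipeline; the audio file name is a placeholder for any Korean speech recording:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="freshpearYoon/large-v3_3",
    chunk_length_s=30,  # chunk long recordings; Whisper works on 30-second windows
)

# "sample.wav" is a placeholder for any 16 kHz Korean speech recording.
print(asr("sample.wav")["text"])
```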
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-08
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2001
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:------:|:---------------:|:-------:|
| 1.5045 | 0.16 | 1000 | 6.8830 | 1.4103 | 26.6186 |
| 1.0745 | 0.32 | 2000 | 6.2655 | 1.0532 | 23.9347 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.17.0
- Tokenizers 0.15.2
|
jos0409007/emotion-jordan
|
jos0409007
| 2024-02-19T03:10:22Z | 1 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-19T03:00:01Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: emotion-jordan
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# emotion-jordan
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
graceneutrality/rl_course_vizdoom_health_gathering_supreme
|
graceneutrality
| 2024-02-19T03:09:37Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-19T03:09:29Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 7.97 +/- 2.89
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r graceneutrality/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
huyhuyvu01/VietLlama2_Law_Pretrain_7B
|
huyhuyvu01
| 2024-02-19T03:07:03Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"vi",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-15T07:19:04Z |
---
license: llama2
language:
- vi
- en
---
Starting from BKAI's Vietnamese Llama2 120GB 7B, I continued pretraining on law and online public-services data crawled from VBPL.
### Training process
The model was pretrained on a single A600 system.
Hyperparameters are set as follows:
- Training Regime: BFloat16 mixed precision
- Lora Config:
```
{
"base_model_name_or_path": "meta-llama/Llama-2-7b-hf",
"bias": "none",
"enable_lora": null,
"fan_in_fan_out": false,
"inference_mode": true,
"lora_alpha": 32.0,
"lora_dropout": 0.05,
"merge_weights": false,
"modules_to_save": [
"embed_tokens",
"lm_head"
],
"peft_type": "LORA",
"r": 8,
"target_modules": [
"q_proj",
"v_proj",
"k_proj",
"o_proj",
"gate_proj",
"down_proj",
"up_proj"
],
"task_type": "CAUSAL_LM"
}
```
Please note that **this model requires further supervised fine-tuning (SFT)** to be used in practice!
Usage and other considerations: Please refer to the [Llama 2](https://github.com/facebookresearch/llama)
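Since the checkpoint still needs SFT, the most likely starting point is simply loading it as a causal LM. The snippet below is only a sketch; the dtype and device placement are choices, not part of this card:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huyhuyvu01/VietLlama2_Law_Pretrain_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training used bfloat16 mixed precision
    device_map="auto",
)
# From here, continue with your own SFT pipeline (e.g. TRL's SFTTrainer).
```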
### Training loss
To be updated.
### Disclaimer
This project is built upon bkai-foundation-models/vietnamese-llama2-7b-120GB, which is built upon Meta's Llama-2 model. It is essential to strictly adhere to the open-source license agreement of Llama-2 when using this model. If you incorporate third-party code, please ensure compliance with the relevant open-source license agreements.
It's important to note that the content generated by the model may be influenced by various factors, such as calculation methods, random elements, and potential inaccuracies in quantification. Consequently, this project does not offer any guarantees regarding the accuracy of the model's outputs, and it disclaims any responsibility for consequences resulting from the use of the model's resources and its output.
For those employing the models from this project for commercial purposes, developers must adhere to local laws and regulations to ensure the compliance of the model's output content. This project is not accountable for any products or services derived from such usage.
### Contact
huyhuyvu01@gmail.com (personal email)
https://github.com/huyhuyvu01 (GitHub)
|
huyhuyvu01/Vinallama-Law-Pretrain_7B
|
huyhuyvu01
| 2024-02-19T03:06:25Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"vi",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T02:09:20Z |
---
license: llama2
language:
- vi
- en
---
Starting from Vilm's vinallama-7b-chat, I continued pretraining on law and online public-services data crawled from VBPL.
### Training process
The model was pretrained on a single A600 system.
Hyperparameters are set as follows:
- Training Regime: BFloat16 mixed precision
- Lora Config:
```
{
"base_model_name_or_path": "vilm/vinallama-7b-chat",
"bias": "none",
"enable_lora": null,
"fan_in_fan_out": false,
"inference_mode": true,
"lora_alpha": 32.0,
"lora_dropout": 0.05,
"merge_weights": false,
"modules_to_save": [
"embed_tokens",
"lm_head"
],
"peft_type": "LORA",
"r": 8,
"target_modules": [
"q_proj",
"v_proj",
"k_proj",
"o_proj",
"gate_proj",
"down_proj",
"up_proj"
],
"task_type": "CAUSAL_LM"
}
```
Please note that **this model requires further supervised fine-tuning (SFT)** to be used in practice!
Usage and other considerations: Please refer to the [Llama 2](https://github.com/facebookresearch/llama)
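If you want to reproduce the adapter setup above for further training, the equivalent `peft` configuration looks roughly like this (sketch; only fields taken from the config above are filled in):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=32.0,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    modules_to_save=["embed_tokens", "lm_head"],
)
# Pass this config to peft.get_peft_model(base_model, lora_config) to recreate the setup.
```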
### Training loss
To be updated.
### Disclaimer
This project is built upon vilm/vinallama-7b-chat, which is built upon Meta's Llama-2 model. It is essential to strictly adhere to the open-source license agreement of Llama-2 when using this model. If you incorporate third-party code, please ensure compliance with the relevant open-source license agreements.
It's important to note that the content generated by the model may be influenced by various factors, such as calculation methods, random elements, and potential inaccuracies in quantification. Consequently, this project does not offer any guarantees regarding the accuracy of the model's outputs, and it disclaims any responsibility for consequences resulting from the use of the model's resources and its output.
For those employing the models from this project for commercial purposes, developers must adhere to local laws and regulations to ensure the compliance of the model's output content. This project is not accountable for any products or services derived from such usage.
### Contact
huyhuyvu01@gmail.com (personal email)
https://github.com/huyhuyvu01 (GitHub)
|
dzagardo/quickstart_newdp_eps10
|
dzagardo
| 2024-02-19T02:47:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T02:44:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Deepnoid/OPEN-SOLAR-KO-10.7B-v4
|
Deepnoid
| 2024-02-19T02:35:08Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:beomi/OPEN-SOLAR-KO-10.7B",
"base_model:finetune:beomi/OPEN-SOLAR-KO-10.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T01:58:44Z |
---
license: apache-2.0
base_model: beomi/OPEN-SOLAR-KO-10.7B
tags:
- generated_from_trainer
model-index:
- name: data/Models/beomidpo-out-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: beomi/OPEN-SOLAR-KO-10.7B
load_in_8bit: false
load_in_4bit: false
strict: false
rl: dpo
datasets:
- path: ./data/KR/Ja-ck/Orca-DPO-Pairs-KO/orca_dpo_pairs_ko.json
split: train
type: chatml.intel
ds_type: json
data_files: ["./data/KR/Ja-ck/Orca-DPO-Pairs-KO/orca_dpo_pairs_ko.json"]
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./data/Models/beomidpo-out-v4
adapter: lora
lora_model_dir:
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
warmup_steps: 10
save_steps: 100
save_total_limit: 3
debug:
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
save_safetensors: true
```
</details><br>
# data/Models/beomidpo-out-v4
This model is a fine-tuned version of [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
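As a minimal illustration, and assuming the uploaded weights are the merged model (the repo is tagged as a `transformers` causal LM), it should load like this; the Korean prompt is illustrative:
```python
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Deepnoid/OPEN-SOLAR-KO-10.7B-v4",
    torch_dtype=torch.float16,
    device_map="auto",
)
# Illustrative prompt: "Question: What is the capital of Korea? Answer:"
print(generator("질문: 한국의 수도는 어디인가요?\n답변:", max_new_tokens=64)[0]["generated_text"])
```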
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 1591
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
animeshjoshi/text_classification_tutorial
|
animeshjoshi
| 2024-02-19T02:20:41Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-15T03:17:42Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
metrics:
- accuracy
model-index:
- name: text_classification_tutorial
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: rotten_tomatoes
type: rotten_tomatoes
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8470919324577861
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classification_tutorial
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4228
- Accuracy: 0.8471
## Model description
More information needed
## Intended uses & limitations
More information needed
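As a minimal illustration, the checkpoint should work with the standard `transformers` text-classification pipeline; label names default to LABEL_0/LABEL_1 unless the config maps them to negative/positive:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="animeshjoshi/text_classification_tutorial",
)

# Illustrative movie-review style inputs, matching the rotten_tomatoes domain.
print(classifier("A moving, beautifully shot film with a standout lead performance."))
print(classifier("Two hours of my life I will never get back."))
```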
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4238 | 1.0 | 534 | 0.3782 | 0.8405 |
| 0.2422 | 2.0 | 1068 | 0.4228 | 0.8471 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
AptaArkana/indonesian_bert_base_NER_indoNLU
|
AptaArkana
| 2024-02-19T02:16:21Z | 32 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:indonlu_nergrit",
"base_model:cahya/bert-base-indonesian-NER",
"base_model:finetune:cahya/bert-base-indonesian-NER",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-13T03:16:22Z |
---
license: mit
base_model: cahya/bert-base-indonesian-NER
tags:
- generated_from_trainer
datasets:
- indonlu_nergrit
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: belajarner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: indonlu_nergrit
type: indonlu_nergrit
config: indonlu_nergrit_source
split: validation
args: indonlu_nergrit_source
metrics:
- name: Precision
type: precision
value: 0.7716312056737589
- name: Recall
type: recall
value: 0.8217522658610272
- name: F1
type: f1
value: 0.7959034381858083
- name: Accuracy
type: accuracy
value: 0.9477048970719857
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# belajarner
This model is a fine-tuned version of [cahya/bert-base-indonesian-NER](https://huggingface.co/cahya/bert-base-indonesian-NER) on the indonlu_nergrit dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2621
- Precision: 0.7716
- Recall: 0.8218
- F1: 0.7959
- Accuracy: 0.9477
## Model description
More information needed
## Intended uses & limitations
More information needed
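As a minimal illustration, the checkpoint should work with the `transformers` token-classification pipeline; the Indonesian sentence below is illustrative:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="AptaArkana/indonesian_bert_base_NER_indoNLU",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

# Illustrative sentence: "Joko Widodo visited Surabaya last Monday."
print(ner("Joko Widodo mengunjungi Surabaya pada hari Senin lalu."))
```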
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 209 | 0.1633 | 0.7678 | 0.8142 | 0.7903 | 0.9476 |
| No log | 2.0 | 418 | 0.1623 | 0.7631 | 0.8127 | 0.7871 | 0.9462 |
| 0.1633 | 3.0 | 627 | 0.1978 | 0.7535 | 0.8172 | 0.7841 | 0.9459 |
| 0.1633 | 4.0 | 836 | 0.2103 | 0.7573 | 0.8202 | 0.7875 | 0.9460 |
| 0.0423 | 5.0 | 1045 | 0.2236 | 0.7757 | 0.8097 | 0.7923 | 0.9487 |
| 0.0423 | 6.0 | 1254 | 0.2529 | 0.7843 | 0.8293 | 0.8062 | 0.9474 |
| 0.0423 | 7.0 | 1463 | 0.2559 | 0.77 | 0.8142 | 0.7915 | 0.9467 |
| 0.0136 | 8.0 | 1672 | 0.2621 | 0.7716 | 0.8218 | 0.7959 | 0.9477 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
chandc/roberta-large-finetuned-ner
|
chandc
| 2024-02-19T02:13:14Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:adapter:FacebookAI/roberta-large",
"license:mit",
"region:us"
] | null | 2024-02-18T23:12:04Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
base_model: roberta-large
model-index:
- name: roberta-large-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-ner
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0828
- Precision: 0.9043
- Recall: 0.9245
- F1: 0.9143
- Accuracy: 0.9793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8259 | 1.0 | 878 | 0.2398 | 0.6827 | 0.7083 | 0.6953 | 0.9371 |
| 0.2115 | 2.0 | 1756 | 0.1560 | 0.8021 | 0.8172 | 0.8096 | 0.9600 |
| 0.1612 | 3.0 | 2634 | 0.1274 | 0.8589 | 0.8506 | 0.8547 | 0.9672 |
| 0.124 | 4.0 | 3512 | 0.1081 | 0.8832 | 0.8793 | 0.8813 | 0.9722 |
| 0.1183 | 5.0 | 4390 | 0.0993 | 0.8910 | 0.9036 | 0.8973 | 0.9754 |
| 0.1074 | 6.0 | 5268 | 0.0921 | 0.8974 | 0.9119 | 0.9046 | 0.9773 |
| 0.1004 | 7.0 | 6146 | 0.0874 | 0.8983 | 0.9156 | 0.9068 | 0.9780 |
| 0.0967 | 8.0 | 7024 | 0.0846 | 0.9028 | 0.9227 | 0.9127 | 0.9792 |
| 0.0923 | 9.0 | 7902 | 0.0829 | 0.9039 | 0.9239 | 0.9138 | 0.9795 |
| 0.0884 | 10.0 | 8780 | 0.0828 | 0.9043 | 0.9245 | 0.9143 | 0.9793 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
ivbhatt/Reinforce-training-model
|
ivbhatt
| 2024-02-19T02:09:44Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-19T02:09:34Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-training-model
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jinghuanHuggingface/q-Taxi-v3
|
jinghuanHuggingface
| 2024-02-19T02:07:04Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-19T02:07:02Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jinghuanHuggingface/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
deolekam/my-awesome-model
|
deolekam
| 2024-02-19T02:04:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T02:04:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sunyijia97/falcon-7b-qlora-cstuqa-v7
|
sunyijia97
| 2024-02-19T02:04:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T00:00:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yatsby/qlora-gemini-persona-qna-finetuned
|
yatsby
| 2024-02-19T02:00:40Z | 8 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:beomi/polyglot-ko-12.8b-safetensors",
"base_model:adapter:beomi/polyglot-ko-12.8b-safetensors",
"region:us"
] | null | 2024-02-16T05:22:09Z |
---
library_name: peft
base_model: beomi/polyglot-ko-12.8b-safetensors
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
sayakpaul/mgie
|
sayakpaul
| 2024-02-19T01:55:29Z | 16 | 8 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:2309.17102",
"region:us"
] | null | 2024-02-05T05:55:53Z |
---
library_name: diffusers
---
# MGIE
This repository contains the UNet and LLaVA model checkpoints from [Guiding Instruction-based Image Editing via Multimodal Large Language Models](https://arxiv.org/abs/2309.17102).
For a detailed example of usage, refer to [this notebook](https://github.com/apple/ml-mgie/blob/main/demo.ipynb) and the [official repository](https://github.com/apple/ml-mgie). Additionally, this notebook is a memory-optimized version of the original one. It decouples the MGIE inference pipeline into three broad stages:
1. Calculate all the embeddings in a batched manner with the LLaVA model and the edit head.
2. Pop the LLaVA model off memory to free VRAM.
3. Load the InstructPix2Pix pipeline and perform the editing.
💡 MGIE needs additional setup steps that are important to follow before running inference. Please refer to the
repository for those instructions. Importantly, you need to merge the LLaVA weight deltas with
the original LLaMA parameters. More details are in the repository.
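As a rough illustration of that decoupling, here is a minimal sketch (not the official code): `compute_edit_embeddings`, `llava_model`, `edit_head`, `image`, and `instruction` are hypothetical placeholders for the MGIE-specific pieces shown in the official notebook, and the base InstructPix2Pix checkpoint id below is only an assumption.
```python
import gc

import torch
from diffusers import StableDiffusionInstructPix2PixPipeline

# Stage 1: compute the edit embeddings with the LLaVA model and the edit head.
# (compute_edit_embeddings / llava_model / edit_head stand in for the
#  MGIE-specific code from the official notebook.)
edit_embeds = compute_edit_embeddings(llava_model, edit_head, image, instruction)

# Stage 2: pop the language model off the GPU to reclaim VRAM.
del llava_model, edit_head
gc.collect()
torch.cuda.empty_cache()

# Stage 3: load the InstructPix2Pix pipeline (swapping in the MGIE UNet from this
# repo, as shown in the official notebook) and edit with the precomputed embeddings.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")
edited_image = pipe(image=image, prompt_embeds=edit_embeds).images[0]
```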
## Processing ultra high-resolution images
Since the [InstructPix2Pix pipeline](https://huggingface.co/docs/diffusers/main/en/api/pipelines/pix2pix) doesn't do any internal processing
to resize the input images, you might get OOMs when processing ultra high-resolution images
like [this one](https://i.imgur.com/CiAbKbS.jpg).
So, it's recommended to resize them while preserving their aspect ratio. Here's a utility function that can be used for that:
```python
from diffusers.utils import load_image
from PIL import Image

def resize_image_aspect_ratio(img_url, base_width=None, base_height=None):
# Load the image
img = load_image(img_url).convert("RGB")
# Get the current width and height of the image
width, height = img.size
# Calculate the new dimensions based on the aspect ratio
if base_width is not None:
# Calculate new height based on the base_width to maintain aspect ratio
w_percent = (base_width / float(width))
h_size = int((float(height) * float(w_percent)))
new_size = (base_width, h_size)
elif base_height is not None:
# Calculate new width based on the base_height to maintain aspect ratio
h_percent = (base_height / float(height))
w_size = int((float(width) * float(h_percent)))
new_size = (w_size, base_height)
else:
raise ValueError("Either base_width or base_height must be provided")
# Resize the image
    resized_img = img.resize(new_size, Image.LANCZOS)  # Image.ANTIALIAS was removed in Pillow 10
return resized_img
```
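For example, to bring the sample image above down to a 512-pixel width before editing (the target width here is just an illustrative choice):
```python
resized = resize_image_aspect_ratio("https://i.imgur.com/CiAbKbS.jpg", base_width=512)
print(resized.size)  # width will be 512, height scaled to keep the aspect ratio
```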
## Citation
```
@inproceedings{fu2024mgie,
author = {Tsu-Jui Fu and Wenze Hu and Xianzhi Du and William Yang Wang and Yinfei Yang, and Zhe Gan},
title = {{Guiding Instruction-based Image Editing via Multimodal Large Language Models}},
booktitle = {International Conference on Learning Representations (ICLR)},
year = {2024}
}
```
|
Josephgflowers/Cinder-Phi-2-STEM-2.94B-Test
|
Josephgflowers
| 2024-02-19T01:55:04Z | 173 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"phi",
"text-generation",
"custom_code",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T17:37:25Z |
---
license: mit
widget:
- text: >
<|system|>
You are a helpful assistant</s>
<|user|>
Can you explain to me how quantum computing works?</s>
<|assistant|>
---
Modified version of Phi 2 with 2 added layers.
More details coming soon.
Model Overview: Cinder is an AI chatbot tailored for engaging users in scientific and educational conversations, offering companionship, and sparking imaginative exploration.

|
Swarnava/T5_base_title_v4
|
Swarnava
| 2024-02-19T01:52:08Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-18T15:21:56Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: T5_base_title_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5_base_title_v4
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6697
- Rouge1: 0.4305
- Rouge2: 0.2304
- Rougel: 0.3728
- Rougelsum: 0.3729
- Gen Len: 16.6586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.9653 | 1.0 | 2019 | 1.7927 | 0.4092 | 0.2145 | 0.3528 | 0.3528 | 16.6021 |
| 1.828 | 2.0 | 4038 | 1.7374 | 0.4148 | 0.217 | 0.3557 | 0.3558 | 16.7601 |
| 1.7597 | 3.0 | 6057 | 1.7053 | 0.4183 | 0.2199 | 0.3595 | 0.3594 | 16.8878 |
| 1.6787 | 4.0 | 8076 | 1.6875 | 0.4221 | 0.224 | 0.3649 | 0.3647 | 16.6098 |
| 1.6361 | 5.0 | 10095 | 1.6730 | 0.4227 | 0.2229 | 0.3655 | 0.3657 | 16.6044 |
| 1.6032 | 6.0 | 12114 | 1.6679 | 0.4266 | 0.227 | 0.3696 | 0.3697 | 16.4617 |
| 1.5701 | 7.0 | 14133 | 1.6657 | 0.4265 | 0.2273 | 0.3694 | 0.3692 | 16.4184 |
| 1.5359 | 8.0 | 16152 | 1.6677 | 0.4273 | 0.2274 | 0.3695 | 0.3695 | 16.5704 |
| 1.5136 | 9.0 | 18171 | 1.6639 | 0.4271 | 0.2278 | 0.3697 | 0.3697 | 16.5989 |
| 1.4776 | 10.0 | 20190 | 1.6641 | 0.4291 | 0.2297 | 0.3723 | 0.3722 | 16.5137 |
| 1.4507 | 11.0 | 22209 | 1.6650 | 0.4307 | 0.2303 | 0.372 | 0.3718 | 16.5868 |
| 1.437 | 12.0 | 24228 | 1.6654 | 0.4277 | 0.2274 | 0.3711 | 0.3711 | 16.7277 |
| 1.4428 | 13.0 | 26247 | 1.6689 | 0.4296 | 0.2287 | 0.3714 | 0.3715 | 16.7078 |
| 1.4183 | 14.0 | 28266 | 1.6697 | 0.4307 | 0.2301 | 0.3726 | 0.3725 | 16.6979 |
| 1.4244 | 15.0 | 30285 | 1.6697 | 0.4305 | 0.2304 | 0.3728 | 0.3729 | 16.6586 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
jinghuanHuggingface/q-FrozenLake-v1-4x4-noSlippery
|
jinghuanHuggingface
| 2024-02-19T01:51:08Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-11T09:59:11Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="jinghuanHuggingface/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Alaa33/Elsafah
|
Alaa33
| 2024-02-19T01:26:38Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-02-19T01:26:38Z |
---
license: bigscience-bloom-rail-1.0
license_name: banha-university
license_link: LICENSE
---
|
serpdotai/sparsetral-16x7B-v2-SPIN_iter1
|
serpdotai
| 2024-02-19T01:24:30Z | 10 | 13 |
transformers
|
[
"transformers",
"safetensors",
"sparsetral",
"text-generation",
"conversational",
"custom_code",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:argilla/dpo-mix-7k",
"arxiv:2401.01335",
"arxiv:2402.09353",
"arxiv:2106.09685",
"arxiv:2401.02731",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T01:10:37Z |
---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
- jondurbin/truthy-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- argilla/dpo-mix-7k
language:
- en
---
This model is [sparsetral-16x7B-v2](https://huggingface.co/serpdotai/sparsetral-16x7B-v2) further tuned utilizing [SPIN](https://arxiv.org/abs/2401.01335) on [OpenHermes-2.5](https://huggingface.co/datasets/teknium/OpenHermes-2.5) mixed with traditional DPO samples. This is iteration_1, temporarily pausing further training runs in favor of utilizing [DoRA](https://arxiv.org/pdf/2402.09353.pdf) over [LoRA](https://arxiv.org/abs/2106.09685). May also start from the beginning with v3 for proper chat token support, also debating adding function tokens + function calling. If you have any tasks that Sparsetral has been weak at, feel free to send us some prompts/chats + desired completions and we will see about making sure your task is supported!

Kuru~ Kuru~

## Training
- 8x A6000s
- Base model is [sparsetral-16x7B-v2-SPIN_iter0](https://huggingface.co/serpdotai/sparsetral-16x7B-v2-SPIN_iter0)
- [Forked version of unsloth](https://github.com/serp-ai/unsloth) for efficient training
- Sequence Length: 4096
- Effective batch size: 64
- Learning Rate: 5e-7 with linear decay (0.1 warmup ratio)
- Epochs: 2
- 100k samples (50k new SPIN + 50k from iter_0)
- QLoRA:
- 256 r and 256 alpha
- ```python
target_modules=[
"q_proj",
"k_proj",
"v_proj",
"o_proj",
"gate_proj",
"up_proj",
"down_proj",
"adapter_down",
"adapter_up",
]
```
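For reference, a configuration like the one above could be written with the `peft` library roughly as follows. This is only a sketch: the actual run used a forked unsloth setup, and the dropout, bias, and task-type arguments below are assumptions not stated in this card.
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=256,
    lora_alpha=256,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "adapter_down", "adapter_up",
    ],
    lora_dropout=0.0,   # assumed
    bias="none",        # assumed
    task_type="CAUSAL_LM",
)
```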
## Prompt Format
```
<|im_start|>system\n{message}<|im_end|>\n<|im_start|>user\n{message}<|im_end|>\n<|im_start|>assistant\n
```
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("serpdotai/sparsetral-16x7B-v2-SPIN_iter0", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("serpdotai/sparsetral-16x7B-v2-SPIN_iter0", device_map="auto", trust_remote_code=True).eval()
system_str = "<|im_start|>system\n{message}<|im_end|>\n"
user_str = "<|im_start|>user\n{message}<|im_end|>\n"
assistant_str = "<|im_start|>assistant\n{message}<|im_end|>\n"
def construct_prompt(messages):
prompt = ""
for message in messages:
if message["from"] in ["human", "user"]:
prompt += user_str.format(
message=message["value"]
)
elif message["from"] in ["gpt", "assistant"]:
prompt += assistant_str.format(
message=message["value"]
)
elif message["from"] in ["system", "instruction"]:
prompt += system_str.format(
message=message["value"]
)
else:
raise ValueError(
f"Unknown message type: {message['from']}"
)
return prompt + "<|im_start|>assistant\n"
system = "You are a helpful assistant who will help the user to the best of their ability. If you don't know something, say \"I don't know\""
user = "Are you sentient?"
messages = [
{"from": "system", "value": system},
{"from": "user", "value": user},
]
prompt = construct_prompt(messages)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = inputs.to(model.device)
pred = model.generate(**inputs, max_length=4096, do_sample=True, top_k=50, top_p=0.99, temperature=0.9, num_return_sequences=1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
## Other Information
Paper reference: [Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks](https://arxiv.org/abs/2401.02731)
[Original Paper repo](https://github.com/wuhy68/Parameter-Efficient-MoE)
[Forked repo with mistral support (sparsetral)](https://github.com/serp-ai/Parameter-Efficient-MoE)
If you are interested in faster inferencing, check out our [fork of vLLM](https://github.com/serp-ai/vllm) that adds sparsetral support
|
Hatsu2004/q-FrozenLake-v1-4x4-noSlippery
|
Hatsu2004
| 2024-02-19T01:15:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-19T01:00:28Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Hatsu2004/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jisukim8873/falcon-7B-case-6
|
jisukim8873
| 2024-02-19T01:13:08Z | 149 | 0 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"en",
"dataset:Open-Orca/SlimOrca",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-16T05:34:53Z |
---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Jisu Kim
* Model Type: Large Language Model
# Model Architecture
This model is based on falcon-7B. We fine-tune this model for the data ordering task.
falcon-7B is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sample from the Open-Orca dataset. (We fine-tune on 100,000 samples.)
# Github
https://github.com/trailerAI
# License
Apache License 2.0
|
jisukim8873/falcon-7B-case-3
|
jisukim8873
| 2024-02-19T01:12:55Z | 157 | 0 |
transformers
|
[
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"en",
"dataset:Open-Orca/SlimOrca",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T00:40:46Z |
---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
language:
- en
---
# Model Details
* Model Description: This model is a test for data ordering.
* Developed by: Jisu Kim
* Model Type: Large Language Model
# Model Architecture
This model is based on falcon-7B. We fine-tune this model for the data ordering task.
falcon-7B is a transformer model, with the following architecture choices:
* Grouped-Query Attention
* Sliding-Window Attention
* Byte-fallback BPE tokenizer
# Dataset
We randomly sample from the Open-Orca dataset. (We fine-tune on 100,000 samples.)
# Github
https://github.com/trailerAI
# License
Apache License 2.0
|
ningrumdaud/distilbert-small-offensive-classification-test
|
ningrumdaud
| 2024-02-19T01:08:22Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7",
"base_model:finetune:MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-19T00:43:04Z |
---
license: mit
base_model: MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7
tags:
- generated_from_trainer
model-index:
- name: distilbert-small-offensive-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-small-offensive-classification
This model is a fine-tuned version of [MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 120 | 1.0890 | 0.5333 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
DrNicefellow/Qwen1.5-72B-Chat-4bpw-exl2
|
DrNicefellow
| 2024-02-19T00:55:22Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-17T20:50:59Z |
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE
---
# Qwen1.5-72B-Chat-4.0bpw-exl2
This is a 4.0bpw quantized version of [Qwen/Qwen1.5-72B-Chat](https://huggingface.co/Qwen/Qwen1.5-72B-Chat) made with [exllamav2](https://github.com/turboderp/exllamav2).
To run this, make sure you have installed an up-to-date version of ExLlamaV2.
## License
This project is distributed under the Tongyi Qianwen LICENSE AGREEMENT. See the [LICENSE](https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE) file for more information.
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink!
|
DrNicefellow/Qwen1.5-72B-Chat-4.65bpw-exl2
|
DrNicefellow
| 2024-02-19T00:54:59Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-17T00:16:50Z |
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE
---
# Qwen1.5-72B-Chat-4.65bpw-exl2
This is a 4.65bpw quantized version of [Qwen/Qwen1.5-72B-Chat](https://huggingface.co/Qwen/Qwen1.5-72B-Chat) made with [exllamav2](https://github.com/turboderp/exllamav2).
## License
This project is distributed under the Tongyi Qianwen LICENSE AGREEMENT. See the [LICENSE](https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE) file for more information.
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink!
|
jdorairaj/Bert-uncased-adapter-wnli
|
jdorairaj
| 2024-02-19T00:54:56Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"dataset:wnli",
"region:us"
] | null | 2024-02-19T00:47:36Z |
---
tags:
- adapter-transformers
- bert
datasets:
- wnli
---
# Adapter `jdorairaj/Bert-uncased-adapter-wnli` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [wnli](https://huggingface.co/datasets/wnli/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("jdorairaj/Bert-uncased-adapter-wnli", source="hf", set_active=True)
```
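As a quick sanity check, the activated adapter and its classification head can then be run on a sentence pair. This is a sketch only: the example pair and the assumption that the head returns two WNLI labels (not entailment / entailment) are not taken from this card.
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer(
    "The trophy doesn't fit into the suitcase because it is too large.",
    "The trophy is too large.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # classification head loaded by load_adapter
print(logits.softmax(dim=-1))
```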
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
DrNicefellow/Qwen1.5-14B-Chat-4bpw-exl2
|
DrNicefellow
| 2024-02-19T00:54:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T20:26:11Z |
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-14B-Chat/blob/main/LICENSE
---
# Qwen1.5-14B-Chat-4.0bpw-exl2
This is a 4.0bpw quantized version of [Qwen/Qwen1.5-14B-Chat](https://huggingface.co/Qwen/Qwen1.5-14B-Chat) made with [exllamav2](https://github.com/turboderp/exllamav2).
To run this, make sure you have installed an up-to-date version of ExLlamaV2.
## License
This project is distributed under the Tongyi Qianwen LICENSE AGREEMENT. See the [LICENSE](https://huggingface.co/Qwen/Qwen1.5-72B-Chat/blob/main/LICENSE) file for more information.
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink!
|
dzagardo/quickstart_newdp_eps5
|
dzagardo
| 2024-02-19T00:52:03Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T00:49:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MAdAiLab/llama2_7b_SGD_Cosine_merged_final
|
MAdAiLab
| 2024-02-19T00:49:51Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-19T00:47:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
urbija/cer_model-iii
|
urbija
| 2024-02-19T00:49:50Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-base-cased-v1.1",
"base_model:finetune:dmis-lab/biobert-base-cased-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-19T00:49:38Z |
---
base_model: dmis-lab/biobert-base-cased-v1.1
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: cer_model-iii
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cer_model-iii
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2146
- Precision: 0.9186
- Recall: 0.8689
- F1: 0.8931
- Accuracy: 0.9355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0124 | 1.0 | 4841 | 0.2169 | 0.9157 | 0.8545 | 0.8841 | 0.9272 |
| 0.0025 | 2.0 | 9682 | 0.2221 | 0.9180 | 0.8708 | 0.8938 | 0.9318 |
| 0.0001 | 3.0 | 14523 | 0.2146 | 0.9186 | 0.8689 | 0.8931 | 0.9355 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
misaza/vit_model_miguel_esteban_isaza
|
misaza
| 2024-02-19T00:48:27Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-19T00:33:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit_model_miguel_esteban_isaza
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit_model_miguel_esteban_isaza
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0601
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1467 | 3.85 | 500 | 0.0601 | 0.9850 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
nishantyadav/emb_crossenc_msmarco_miniLM
|
nishantyadav
| 2024-02-19T00:46:13Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-19T00:19:29Z |
This is a cross-encoder model with dot-product based scoring mechanism trained on MS-MARCO dataset.
The parameters of the cross-encoder are initialized using a 6-layer [minilm model](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased)
and is trained via distillation using scores from three different teacher models --
[model 1](https://huggingface.co/nishantyadav/emb_crossenc_msmarco_teacher_1_albert),
[model 2](https://huggingface.co/nishantyadav/emb_crossenc_msmarco_teacher_2_bert_base), and
[model 3](https://huggingface.co/nishantyadav/emb_crossenc_msmarco_teacher_3_bert_large_wwm).
This model is used in experiments of our [EMNLP 2023](https://aclanthology.org/2023.findings-emnlp.544/) and [ICLR 2024](https://openreview.net/forum?id=1CPta0bfN2) papers.
See our EMNLP 2022 paper titled "Efficient Nearest Neighbor Search for Cross-Encoder Models using Matrix Factorization" for more details on the dot-product based scoring mechanism.
---
license: apache-2.0
---
|
alnrg2arg/blockchainlabs_tinyllama_fusion_LHK_yunkong
|
alnrg2arg
| 2024-02-19T00:41:05Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-19T00:21:44Z |
---
license: mit
---
This model is based on the fusion strategy offered by Fanqi Wan (https://github.com/fanqiwan/FuseLLM).
Three models are fused together.
Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
Blending model 1: HanNayeoniee/LHK_DPO_v1
Blending model 2: yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B
This model will be further optimized with Laser and DPO later.
This project aims to build an on-device sLM. We are running experiments on the models.
|
jdorairaj/Bert-uncased-adapter-rte
|
jdorairaj
| 2024-02-19T00:40:29Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"dataset:rte",
"region:us"
] | null | 2024-02-19T00:40:27Z |
---
tags:
- adapter-transformers
- bert
datasets:
- rte
---
# Adapter `jdorairaj/Bert-uncased-adapter-rte` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [rte](https://huggingface.co/datasets/rte/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("jdorairaj/Bert-uncased-adapter-rte", source="hf", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
nishantyadav/emb_crossenc_msmarco_teacher_1_albert
|
nishantyadav
| 2024-02-19T00:33:02Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-19T00:04:37Z |
This is a cross-encoder model with dot-product based scoring mechanism trained on MS-MARCO dataset.
The parameters of the cross-encoder are initialized using [albert-large-v2](https://huggingface.co/albert/albert-base-v2).
This model is used as a teacher model for training a [MiniLM-based cross-encoder model](https://huggingface.co/nishantyadav/emb_crossenc_msmarco_miniLM)
which is used in experiments of our [EMNLP 2023](https://aclanthology.org/2023.findings-emnlp.544/) and [ICLR 2024](https://openreview.net/forum?id=1CPta0bfN2) papers.
See our EMNLP 2022 paper titled "Efficient Nearest Neighbor Search for Cross-Encoder Models using Matrix Factorization" for more details on the dot-product based scoring mechanism.
---
license: apache-2.0
---
|
mnemic/nails_seg_yolov8
|
mnemic
| 2024-02-19T00:21:01Z | 0 | 0 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2024-02-18T22:04:57Z |
---
license: cc-by-4.0
---
A YOLOv8 model that segments nails in images.
The model can be used as an [ADetailer](https://github.com/Bing-su/adetailer) model (for [Automatic1111](https://github.com/AUTOMATIC1111/) / Stable Diffusion use), or with other [inference scripts](https://github.com/MNeMoNiCuZ/yolov8-scripts) to return detection bounding boxes of nails.
The model is entirely trained on the following dataset:
[Personal Projects/Nails Segmentation](https://universe.roboflow.com/personal-projects-jfbag/nails_segmentation)
A tutorial and code showing how to use the model can be found on this GitHub: https://github.com/MNeMoNiCuZ/yolov8-scripts or in this [CivitAI article](https://civitai.com/articles/4080/training-a-custom-adetailer-model-with-yolov8-detection-model).
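Outside of ADetailer, a minimal inference sketch with the `ultralytics` package might look like the following; the weight filename and test image are assumptions, so adjust them to the actual files:
```python
from ultralytics import YOLO

# Load the downloaded segmentation weights (filename assumed for illustration).
model = YOLO("nails_seg_yolov8.pt")

# Run inference; each result carries bounding boxes and segmentation masks.
results = model("hands.jpg")
for result in results:
    print(result.boxes.xyxy)  # nail bounding boxes
    print(result.masks)       # nail segmentation masks
```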

|
Kukedlc/Mistral-FT-Code-Adapter
|
Kukedlc
| 2024-02-19T00:20:55Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-02-19T00:19:49Z |
---
license: apache-2.0
---
Peft & LoRA fine tuning
Adapter for Kukedlc/NeuralMaxime-7B-slerp
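A minimal sketch of attaching this adapter with `peft` is shown below, assuming the adapter loads directly onto the named base model and that the base tokenizer is unchanged (both are assumptions, not statements from this card):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Kukedlc/NeuralMaxime-7B-slerp", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Kukedlc/Mistral-FT-Code-Adapter")
tokenizer = AutoTokenizer.from_pretrained("Kukedlc/NeuralMaxime-7B-slerp")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```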
|
MAdAiLab/llama2_7b_AdamW_Cosine_merged_final
|
MAdAiLab
| 2024-02-19T00:19:17Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-19T00:17:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yanex0/xxMix-9realistic
|
yanex0
| 2024-02-19T00:17:34Z | 0 | 1 | null |
[
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-24T20:30:25Z |
---
license: creativeml-openrail-m
pipeline_tag: text-to-image
---
### model XXMix 9Realistic
The model was developed by <a href="https://civitai.com/user/Zyx_xx/models">Zyx_xx</a>; it is important to comply with the applicable license and copyright policies when using this model.
<p>...</p>
preview v4
<img src="https://yanex0.mywebdev66.repl.co/img-v40.png" width="256" height="256">
preview v3
<img src="https://yanex0.mywebdev66.repl.co/img-v30.png" width="256" height="256">
preview v2.6
<img src="https://yanex0.mywebdev66.repl.co/img-v26.png" width="256" height="256">
### License and Copyright Policy
- The AI model uploaded in this project is subject to the license and copyright terms set by its original owner. Prior to using this model, it is important to understand and comply with the applicable terms and conditions.
- Please note that we only provide this model within the scope of this project and are not responsible for the usage of the model beyond the limitations set by the applicable license and copyright.
<p>please check new version on <a href="https://civitai.com/models/47274?modelVersionId=102222">CivitAi</a>...</p>
|
davidataka/summary_resume_keywords
|
davidataka
| 2024-02-19T00:16:58Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:d0rj/rut5-base-summ",
"base_model:finetune:d0rj/rut5-base-summ",
"region:us"
] | null | 2024-02-19T00:16:53Z |
---
base_model: d0rj/rut5-base-summ
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summary_resume_keywords
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summary_resume_keywords
This model is a fine-tuned version of [d0rj/rut5-base-summ](https://huggingface.co/d0rj/rut5-base-summ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9737
- Rouge1: 0.2285
- Rouge2: 0.1524
- Rougel: 0.2285
- Rougelsum: 0.2285
- Gen Len: 51.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 90 | 2.7766 | 0.2485 | 0.1111 | 0.2485 | 0.2485 | 52.0 |
| No log | 2.0 | 180 | 2.7734 | 0.2556 | 0.1404 | 0.2389 | 0.2389 | 53.6667 |
| No log | 3.0 | 270 | 2.7763 | 0.2882 | 0.1368 | 0.2557 | 0.2557 | 51.6667 |
| No log | 4.0 | 360 | 2.7921 | 0.2722 | 0.1404 | 0.2389 | 0.2389 | 58.3333 |
| No log | 5.0 | 450 | 2.8146 | 0.2778 | 0.1622 | 0.2607 | 0.2607 | 57.3333 |
| 2.1351 | 6.0 | 540 | 2.8387 | 0.2778 | 0.1622 | 0.2607 | 0.2607 | 57.3333 |
| 2.1351 | 7.0 | 630 | 2.8569 | 0.2778 | 0.1622 | 0.2607 | 0.2607 | 57.3333 |
| 2.1351 | 8.0 | 720 | 2.8736 | 0.2538 | 0.1524 | 0.2538 | 0.2538 | 55.3333 |
| 2.1351 | 9.0 | 810 | 2.8883 | 0.2538 | 0.1524 | 0.2538 | 0.2538 | 55.3333 |
| 2.1351 | 10.0 | 900 | 2.9025 | 0.2315 | 0.1524 | 0.2315 | 0.2315 | 51.0 |
| 2.1351 | 11.0 | 990 | 2.9161 | 0.2315 | 0.1524 | 0.2315 | 0.2315 | 51.0 |
| 1.7131 | 12.0 | 1080 | 2.9269 | 0.2315 | 0.1524 | 0.2315 | 0.2315 | 51.0 |
| 1.7131 | 13.0 | 1170 | 2.9354 | 0.226 | 0.1524 | 0.226 | 0.226 | 54.0 |
| 1.7131 | 14.0 | 1260 | 2.9427 | 0.226 | 0.1524 | 0.226 | 0.226 | 54.0 |
| 1.7131 | 15.0 | 1350 | 2.9471 | 0.2272 | 0.1524 | 0.2272 | 0.2272 | 53.6667 |
| 1.7131 | 16.0 | 1440 | 2.9509 | 0.226 | 0.1524 | 0.226 | 0.226 | 54.0 |
| 1.5914 | 17.0 | 1530 | 2.9558 | 0.2272 | 0.1524 | 0.2272 | 0.2272 | 53.6667 |
| 1.5914 | 18.0 | 1620 | 2.9589 | 0.226 | 0.1524 | 0.226 | 0.226 | 54.0 |
| 1.5914 | 19.0 | 1710 | 2.9636 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 51.0 |
| 1.5914 | 20.0 | 1800 | 2.9660 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 51.0 |
| 1.5914 | 21.0 | 1890 | 2.9687 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 50.3333 |
| 1.5914 | 22.0 | 1980 | 2.9709 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 50.3333 |
| 1.5508 | 23.0 | 2070 | 2.9736 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 50.3333 |
| 1.5508 | 24.0 | 2160 | 2.9742 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 50.3333 |
| 1.5508 | 25.0 | 2250 | 2.9737 | 0.2285 | 0.1524 | 0.2285 | 0.2285 | 51.3333 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
maywell/kiqu-70b
|
maywell
| 2024-02-19T00:07:07Z | 114 | 28 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-17T13:03:20Z |
---
license: cc-by-sa-4.0
language:
- ko
- en
---
# **kiqu-70b** [(Arena Leaderboard)](https://huggingface.co/spaces/instructkr/ko-chatbot-arena-leaderboard)
<img src="./kiqu.webp" alt="kiqu-70B" width="390"/>
**kiqu-70b** is an SFT+DPO trained model based on Miqu-70B-Alpaca-DPO using **Korean** datasets.
Since this model is a fine-tune of miqu-1-70b (a leaked early version of Mistral-Medium), using it for commercial purposes is at your own risk.
본 모델 **kiqu-70b**는 Miqu-70B-Alpaca-DPO 모델을 기반으로 **한국어** 데이터셋을 사용하여 SFT+DPO 훈련을 진행하여 제작되었습니다.
베이스 모델인 miqu-1-70b 모델이 미스트랄-미디움의 초기 유출 버전이기에 상업적 사용에 대한 risk는 본인에게 있습니다.
Aside from that, this model follows **cc-by-sa-4.0**.
본 모델 자체로서는 **cc-by-sa-4.0**을 따릅니다.
# **Model Details**
**Base Model**
miqu-1-70b (Early Mistral-Medium)
**Instruction format**
It follows the **Mistral** format.
Giving few-shot examples to the model is highly recommended.
본 모델은 미스트랄 포맷을 따릅니다.
few-shot 사용을 적극 권장합니다.
```
[INST] {instruction}
[/INST] {output}
```
Multi-shot
```
[INST] {instruction}
[/INST] {output}
[INST] {instruction}
[/INST] {output}
[INST] {instruction}
[/INST] {output}
.
.
.
```
**Recommended Template** - 1-shot with system prompt
```
너는 kiqu-70B라는 한국어에 특화된 언어모델이야. 깔끔하고 자연스럽게 대답해줘!
[INST] 안녕?
[/INST] 안녕하세요! 무엇을 도와드릴까요? 질문이나 궁금한 점이 있다면 언제든지 말씀해주세요.
[INST] {instruction}
[/INST]
```
A trailing space after [/INST] can affect the model's performance by a significant margin, so when doing inference it is recommended not to include a trailing space in the chat template.
[/INST] 뒤에 띄어쓰기는 모델 성능에 유의미한 영향을 미칩니다. 따라서, 인퍼런스(추론)과정에서는 챗 템플릿에 띄어쓰기를 제외하는 것을 적극 권장합니다.
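As a hedged example of applying the template above with plain `transformers` (the question and generation settings are illustrative assumptions, and a 70B model requires substantial GPU memory or quantization):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maywell/kiqu-70b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

system = "너는 kiqu-70B라는 한국어에 특화된 언어모델이야. 깔끔하고 자연스럽게 대답해줘!"
instruction = "한국의 수도는 어디야?"  # illustrative question

# 1-shot template from above; note: no trailing space after the final [/INST].
prompt = (
    f"{system}\n"
    "[INST] 안녕?\n"
    "[/INST] 안녕하세요! 무엇을 도와드릴까요? 질문이나 궁금한 점이 있다면 언제든지 말씀해주세요.\n"
    f"[INST] {instruction}\n"
    "[/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```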
# **Model Benchmark**
TBD
# **Author's Message**
This model's training was sponsored by no one but supported by people around the Earth.
[Support Me](https://www.buymeacoffee.com/mwell)
[Discord Server](https://discord.gg/MrBt3PXdXc)
Contact Me on Discord - is.maywell
Follow me on twitter - https://twitter.com/stablefluffy
|
maywell/kiqu-70b-3.0bpw-exl2
|
maywell
| 2024-02-19T00:06:35Z | 10 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T01:31:40Z |
---
license: cc-by-sa-4.0
language:
- ko
- en
---
# **kiqu-70b** [(Arena Leaderboard)](https://huggingface.co/spaces/instructkr/ko-chatbot-arena-leaderboard)
<img src="./kiqu.webp" alt="kiqu-70B" width="390"/>
**kiqu-70b** is an SFT+DPO trained model based on Miqu-70B-Alpaca-DPO using **Korean** datasets.
Since this model is a fine-tune of miqu-1-70b (a leaked early version of Mistral-Medium), using it for commercial purposes is at your own risk.
본 모델 **kiqu-70b**는 Miqu-70B-Alpaca-DPO 모델을 기반으로 **한국어** 데이터셋을 사용하여 SFT+DPO 훈련을 진행하여 제작되었습니다.
베이스 모델인 miqu-1-70b 모델이 미스트랄-미디움의 초기 유출 버전이기에 상업적 사용에 대한 risk는 본인에게 있습니다.
Aside from that, this model follows **cc-by-sa-4.0**.
본 모델 자체로서는 **cc-by-sa-4.0**을 따릅니다.
# **Model Details**
**Base Model**
miqu-1-70b (Early Mistral-Medium)
**Instruction format**
It follows the **Mistral** format.
Giving few-shot examples to the model is highly recommended.
본 모델은 미스트랄 포맷을 따릅니다.
few-shot 사용을 적극 권장합니다.
```
[INST] {instruction}
[/INST] {output}
```
Multi-shot
```
[INST] {instruction}
[/INST] {output}
[INST] {instruction}
[/INST] {output}
[INST] {instruction}
[/INST] {output}
.
.
.
```
**Recommended Template** - 1-shot with system prompt
```
너는 kiqu-70B라는 한국어에 특화된 언어모델이야. 깔끔하고 자연스럽게 대답해줘!
[INST] 안녕?
[/INST] 안녕하세요! 무엇을 도와드릴까요? 질문이나 궁금한 점이 있다면 언제든지 말씀해주세요.
[INST] {instruction}
[/INST]
```
A trailing space after [/INST] can affect the model's performance by a significant margin, so when doing inference it is recommended not to include a trailing space in the chat template.
[/INST] 뒤에 띄어쓰기는 모델 성능에 유의미한 영향을 미칩니다. 따라서, 인퍼런스(추론)과정에서는 챗 템플릿에 띄어쓰기를 제외하는 것을 적극 권장합니다.
# **Model Benchmark**
TBD
# **Author's Message**
This model's training was sponsored by no one but supported by people around the Earth.
[Support Me](https://www.buymeacoffee.com/mwell)
[Discord Server](https://discord.gg/MrBt3PXdXc)
Contact Me on Discord - is.maywell
Follow me on twitter - https://twitter.com/stablefluffy
|
maywell/kiqu-70b-2.4bpw-exl2
|
maywell
| 2024-02-19T00:05:53Z | 9 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T01:21:47Z |
---
license: cc-by-sa-4.0
language:
- ko
- en
---
# **kiqu-70b** [(Arena Leaderboard)](https://huggingface.co/spaces/instructkr/ko-chatbot-arena-leaderboard)
<img src="./kiqu.webp" alt="kiqu-70B" width="390"/>
**kiqu-70b** is an SFT+DPO trained model based on Miqu-70B-Alpaca-DPO using **Korean** datasets.
Since this model is a fine-tune of miqu-1-70b (a leaked early version of Mistral-Medium), using it for commercial purposes is at your own risk.
본 모델 **kiqu-70b**는 Miqu-70B-Alpaca-DPO 모델을 기반으로 **한국어** 데이터셋을 사용하여 SFT+DPO 훈련을 진행하여 제작되었습니다.
베이스 모델인 miqu-1-70b 모델이 미스트랄-미디움의 초기 유출 버전이기에 상업적 사용에 대한 risk는 본인에게 있습니다.
Aside from that, this model follows **cc-by-sa-4.0**.
본 모델 자체로서는 **cc-by-sa-4.0**을 따릅니다.
# **Model Details**
**Base Model**
miqu-1-70b (Early Mistral-Medium)
**Instruction format**
It follows the **Mistral** format.
Giving few-shot examples to the model is highly recommended.
본 모델은 미스트랄 포맷을 따릅니다.
few-shot 사용을 적극 권장합니다.
```
[INST] {instruction}
[/INST] {output}
```
Multi-shot
```
[INST] {instruction}
[/INST] {output}
[INST] {instruction}
[/INST] {output}
[INST] {instruction}
[/INST] {output}
.
.
.
```
**Recommended Template** - 1-shot with system prompt
```
너는 kiqu-70B라는 한국어에 특화된 언어모델이야. 깔끔하고 자연스럽게 대답해줘!
[INST] 안녕?
[/INST] 안녕하세요! 무엇을 도와드릴까요? 질문이나 궁금한 점이 있다면 언제든지 말씀해주세요.
[INST] {instruction}
[/INST]
```
A trailing space after [/INST] can affect the model's performance by a significant margin, so when doing inference it is recommended not to include a trailing space in the chat template.
[/INST] 뒤에 띄어쓰기는 모델 성능에 유의미한 영향을 미칩니다. 따라서, 인퍼런스(추론)과정에서는 챗 템플릿에 띄어쓰기를 제외하는 것을 적극 권장합니다.
# **Model Benchmark**
TBD
# **Author's Message**
This model's training was sponsored by no one but supported by people around the Earth.
[Support Me](https://www.buymeacoffee.com/mwell)
[Discord Server](https://discord.gg/MrBt3PXdXc)
Contact Me on Discord - is.maywell
Follow me on twitter - https://twitter.com/stablefluffy
|
euser/wKAN-7b
|
euser
| 2024-02-18T23:58:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T23:52:49Z |
---
tags:
- merge
- mergekit
---
# wKAN-7b
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the **DARE TIES** merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base.
## Usage Example
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "euser/wKAN-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
deepaknh/falcon7b-FineTuningQLORA_FullTrainDataset
|
deepaknh
| 2024-02-18T23:56:39Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-02-17T03:23:29Z |
---
library_name: peft
base_model: ybelkada/falcon-7b-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.6.1
|
dzagardo/quickstart_newdp_eps2.5
|
dzagardo
| 2024-02-18T23:40:43Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T23:38:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bartowski/NeuralMonarch-7B-exl2
|
bartowski
| 2024-02-18T23:10:40Z | 6 | 0 | null |
[
"merge",
"lazymergekit",
"dpo",
"rlhf",
"text-generation",
"en",
"base_model:mlabonne/Monarch-7B",
"base_model:finetune:mlabonne/Monarch-7B",
"license:cc-by-nc-4.0",
"region:us"
] |
text-generation
| 2024-02-18T22:53:18Z |
---
license: cc-by-nc-4.0
tags:
- merge
- lazymergekit
- dpo
- rlhf
dataset:
- mlabonne/truthy-dpo-v0.1
- mlabonne/distilabel-intel-orca-dpo-pairs
base_model:
- mlabonne/Monarch-7B
language:
- en
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of NeuralMonarch-7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits-per-weight option, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/mlabonne/NeuralMonarch-7B
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/NeuralMonarch-7B-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/NeuralMonarch-7B-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/NeuralMonarch-7B-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/NeuralMonarch-7B-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/NeuralMonarch-7B-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/NeuralMonarch-7B-exl2 NeuralMonarch-7B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `NeuralMonarch-7B-exl2`:
```shell
mkdir NeuralMonarch-7B-exl2
huggingface-cli download bartowski/NeuralMonarch-7B-exl2 --local-dir NeuralMonarch-7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir NeuralMonarch-7B-exl2-6_5
huggingface-cli download bartowski/NeuralMonarch-7B-exl2 --revision 6_5 --local-dir NeuralMonarch-7B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir NeuralMonarch-7B-exl2-6.5
huggingface-cli download bartowski/NeuralMonarch-7B-exl2 --revision 6_5 --local-dir NeuralMonarch-7B-exl2-6.5 --local-dir-use-symlinks False
```
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
arbitropy/bert-finetuned-ner-bangla
|
arbitropy
| 2024-02-18T23:06:47Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-18T00:41:07Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner-bangla
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-bangla
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1194 | 0.84 | 500 | 0.1120 |
| 0.1027 | 1.68 | 1000 | 0.1048 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
bbunijieun/ft_results
|
bbunijieun
| 2024-02-18T23:00:28Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-16T02:32:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CorticalStack/mistral-7b-neuralhermes-2.5-dpo
|
CorticalStack
| 2024-02-18T22:48:37Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"dpo",
"conversational",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T22:46:44Z |
---
license: apache-2.0
tags:
- dpo
dataset:
- Intel/orca_dpo_pairs
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
---
# mistral-7b-neuralhermes-2.5-dpo
mistral-7b-neuralhermes-2.5-dpo is a DPO fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) using the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) dataset.
### LoRA
- r: 16
- LoRA alpha: 16
- LoRA dropout: 0.05
### Training arguments
- Batch size: 4
- Gradient accumulation steps: 4
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 5e-05
- Learning rate scheduler type: cosine
- Beta: 0.1
- Max prompt length: 1024
- Max length: 1536
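As a rough illustration, the LoRA and training settings listed above map onto PEFT and TRL roughly as in the sketch below. The early-2024 `DPOTrainer` interface and the dataset column mapping are assumptions, not taken from this card.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Map the orca_dpo_pairs columns to the prompt/chosen/rejected fields DPOTrainer expects.
raw = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = raw.map(
    lambda row: {"prompt": row["question"], "chosen": row["chosen"], "rejected": row["rejected"]},
    remove_columns=raw.column_names,
)

peft_config = LoraConfig(r=16, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

training_args = TrainingArguments(
    output_dir="mistral-7b-neuralhermes-2.5-dpo",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    optim="paged_adamw_32bit",   # requires bitsandbytes
    max_steps=100,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    remove_unused_columns=False,  # needed for the DPO data collator
)

trainer = DPOTrainer(
    model,
    args=training_args,
    beta=0.1,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```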
|
aspanner/llama-2-7b-aiopsfinetuned-q8_0-gguf
|
aspanner
| 2024-02-18T22:42:41Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-02-18T21:56:45Z |
---
license: apache-2.0
---
This is the quantized version of the original llama2-based model, available to download and run inference on a CPU.
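A minimal CPU inference sketch with `llama-cpp-python` is shown below. The GGUF filename and the prompt are assumptions; check the repository files for the actual name.

```python
# Sketch: CPU inference on the q8_0 GGUF with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename below is an assumption; replace it with the actual file in this repo.
gguf_path = hf_hub_download(
    repo_id="aspanner/llama-2-7b-aiopsfinetuned-q8_0-gguf",
    filename="llama-2-7b-aiopsfinetuned-q8_0.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("What does a sudden spike in 5xx responses usually indicate?", max_tokens=128)
print(out["choices"][0]["text"])
```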
|
Iceman08/model
|
Iceman08
| 2024-02-18T22:31:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-18T22:20:01Z |
pip install 'langchain[llms]' huggingface-hub langchain transformers
|
dzagardo/quickstart_newdp_eps2
|
dzagardo
| 2024-02-18T22:29:24Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-18T22:27:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Fm505/dummy-model
|
Fm505
| 2024-02-18T22:27:30Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-02-08T16:09:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jdorairaj/Bert-Adapters
|
jdorairaj
| 2024-02-18T22:25:05Z | 2 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"bert",
"dataset:cola",
"region:us"
] | null | 2024-02-18T22:17:59Z |
---
tags:
- adapter-transformers
- bert
datasets:
- cola
---
# Adapter `jdorairaj/Bert-Adapters` for bert-base-uncased
An [adapter](https://adapterhub.ml) for the `bert-base-uncased` model that was trained on the [cola](https://huggingface.co/datasets/cola/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
adapter_name = model.load_adapter("jdorairaj/Bert-Adapters", source="hf", set_active=True)
```
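As a hedged continuation of the snippet above, the loaded classification head can be queried like a normal sequence-classification model. The example sentence is illustrative; for CoLA, label 0 means unacceptable and 1 means acceptable.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
inputs = tokenizer("The book was written by John.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # uses the active adapter and its prediction head

print(outputs.logits.argmax(dim=-1).item())  # predicted acceptability label
```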
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
mi-rei/clinical_trial_prediction_LLaMA
|
mi-rei
| 2024-02-18T22:20:00Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"arxiv:1910.09700",
"base_model:baffo32/decapoda-research-llama-7B-hf",
"base_model:adapter:baffo32/decapoda-research-llama-7B-hf",
"region:us"
] | null | 2024-02-12T17:34:49Z |
---
library_name: peft
base_model: baffo32/decapoda-research-llama-7B-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
mathieu1256/layoutlmv3-test-2
|
mathieu1256
| 2024-02-18T22:15:27Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:cord",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-18T17:43:14Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
datasets:
- cord
model-index:
- name: layoutlmv3-test-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-test-2
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
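A minimal sketch of how these hyperparameters might map onto the `transformers` Trainer API is shown below; the actual training script is not published, and the evaluation cadence is assumed from the results table.

```python
# Hedged sketch: approximates the hyperparameters listed above; the exact
# training script for this checkpoint is not published.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv3-test-2",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective train batch size of 8
    max_steps=2000,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="steps",     # assumption: evaluate every 500 steps, matching the results table
    eval_steps=500,
)
```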
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0437 | 0.47 | 500 | 0.5549 |
| 0.0006 | 0.93 | 1000 | 0.6001 |
| 0.0005 | 1.4 | 1500 | 0.6243 |
| 0.0003 | 1.86 | 2000 | 0.6335 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
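A hedged usage sketch (not provided by the model author) for running token classification with this checkpoint follows; the processor is loaded from the base model in case the fine-tuned repo does not ship processor files, and the input image path is a placeholder.

```python
# Hedged inference sketch: label names depend on how the cord dataset was prepared,
# and apply_ocr=True requires pytesseract to be installed.
from transformers import AutoProcessor, AutoModelForTokenClassification
from PIL import Image

# Processor taken from the base model as an assumption; swap in the fine-tuned repo if it includes one.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("mathieu1256/layoutlmv3-test-2")

image = Image.open("receipt.png").convert("RGB")  # placeholder document image
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
print(predictions)
```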
|