modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Yura32000/Reinforce-cartpole_v1
|
Yura32000
| 2023-11-13T14:26:20Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T14:26:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
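A minimal evaluation sketch, not part of the original card: the filename `model.pt` and the `act()` interface are assumptions based on the Unit 4 notebook, so check the repository and the course code before relying on them.
```python
# Hypothetical loading/evaluation sketch -- filenames and the Policy interface are
# assumptions based on the course notebook, not taken from this repository.
import gymnasium as gym
import torch
from huggingface_hub import hf_hub_download

# The Unit 4 notebook typically pushes the full pickled Policy module; the class
# definition from the notebook must be importable for torch.load to succeed.
checkpoint = hf_hub_download(repo_id="Yura32000/Reinforce-cartpole_v1", filename="model.pt")
policy = torch.load(checkpoint)

env = gym.make("CartPole-v1")
state, _ = env.reset()
total_reward, done = 0.0, False
while not done:
    action, _ = policy.act(state)  # Unit 4 policies expose act(state) -> (action, log_prob)
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```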
|
lmqg/mt5-base-zhquad-qg
|
lmqg
| 2023-11-13T14:20:10Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question generation",
"zh",
"dataset:lmqg/qg_zhquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-10T10:25:27Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: zh
datasets:
- lmqg/qg_zhquad
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近<hl> 南安普敦中央 <hl>火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。"
example_title: "Question Generation Example 1"
- text: "芝加哥大学的<hl> 1960—61 <hl>集团理论年汇集了Daniel Gorenstein、John G. Thompson和Walter Feit等团体理论家,奠定了一个合作的基础,借助于其他众多数学家的输入,1982中对所有有限的简单群进行了分类。这个项目的规模超过了以往的数学研究,无论是证明的长度还是研究人员的数量。目前正在进行研究,以简化这一分类的证明。如今,群论仍然是一个非常活跃的数学分支,影响着许多其他领域"
example_title: "Question Generation Example 2"
model-index:
- name: lmqg/mt5-base-zhquad-qg
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_zhquad
type: default
args: default
metrics:
- name: BLEU4 (Question Generation)
type: bleu4_question_generation
value: 14.73
- name: ROUGE-L (Question Generation)
type: rouge_l_question_generation
value: 34.72
- name: METEOR (Question Generation)
type: meteor_question_generation
value: 23.92
- name: BERTScore (Question Generation)
type: bertscore_question_generation
value: 77.38
- name: MoverScore (Question Generation)
type: moverscore_question_generation
value: 57.5
---
# Model Card of `lmqg/mt5-base-zhquad-qg`
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question generation task on the [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
- **Language:** zh
- **Training data:** [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="zh", model="lmqg/mt5-base-zhquad-qg")
# model prediction
questions = model.generate_q(list_context="南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近南安普敦中央火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。", list_answer="南安普敦中央")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mt5-base-zhquad-qg")
output = pipe("南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近<hl> 南安普敦中央 <hl>火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-zhquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_zhquad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore | 77.38 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_1 | 37 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_2 | 25.9 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_3 | 19.25 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_4 | 14.73 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| METEOR | 23.92 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| MoverScore | 57.5 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| ROUGE_L | 34.72 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_zhquad
- dataset_name: default
- input_types: paragraph_answer
- output_types: question
- prefix_types: None
- model: google/mt5-base
- max_length: 512
- max_length_output: 32
- epoch: 16
- batch: 16
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-zhquad-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
pankajemplay/mistral_7b-instruct-intent
|
pankajemplay
| 2023-11-13T14:14:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2023-11-13T14:14:27Z |
---
library_name: peft
base_model: /kaggle/input/mistral/pytorch/7b-instruct-v0.1-hf/1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
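The values above map onto `transformers`' `BitsAndBytesConfig`. The sketch below is illustrative only and not part of the original card: it rebuilds an equivalent 4-bit NF4 setup and attaches this adapter, substituting the public `mistralai/Mistral-7B-Instruct-v0.1` checkpoint for the local Kaggle path recorded as `base_model`.
```python
# Hedged sketch only: reconstructs a 4-bit NF4 quantization setup matching the values
# listed above and attaches this PEFT adapter. "mistralai/Mistral-7B-Instruct-v0.1"
# stands in for the local Kaggle path recorded as base_model in this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",  # assumption: public counterpart of the base model
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model = PeftModel.from_pretrained(base, "pankajemplay/mistral_7b-instruct-intent")
```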
### Framework versions
- PEFT 0.6.1
|
mozart-ai/BAAI__bge-small-en-v1.5__Mozart_Fine_Tuned-10
|
mozart-ai
| 2023-11-13T14:01:47Z | 126 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-13T14:01:42Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# mozart-ai/BAAI__bge-small-en-v1.5__Mozart_Fine_Tuned-10
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('mozart-ai/BAAI__bge-small-en-v1.5__Mozart_Fine_Tuned-10')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('mozart-ai/BAAI__bge-small-en-v1.5__Mozart_Fine_Tuned-10')
model = AutoModel.from_pretrained('mozart-ai/BAAI__bge-small-en-v1.5__Mozart_Fine_Tuned-10')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=mozart-ai/BAAI__bge-small-en-v1.5__Mozart_Fine_Tuned-10)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 10762 with parameters:
```
{'batch_size': 2}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
HuyenNguyen/wav2vec2-large-mms-1b-vi-colab
|
HuyenNguyen
| 2023-11-13T14:00:22Z | 27 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_6_1",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-13T03:26:08Z |
---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
datasets:
- common_voice_6_1
metrics:
- wer
model-index:
- name: wav2vec2-large-mms-1b-vi-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_6_1
type: common_voice_6_1
config: vi
split: test
args: vi
metrics:
- name: Wer
type: wer
value: 1.018646408839779
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-mms-1b-vi-colab
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the common_voice_6_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0469
- Wer: 1.0186
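A hedged inference sketch (not part of the auto-generated card); note that the reported WER of roughly 1.02 suggests the checkpoint may not yet produce usable transcriptions.
```python
# Hedged sketch: transcribe a Vietnamese recording with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="HuyenNguyen/wav2vec2-large-mms-1b-vi-colab",
)
# "sample.wav" is a placeholder path to a 16 kHz mono recording.
print(asr("sample.wav"))
```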
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.9124 | 2.78 | 100 | 6.0469 | 1.0186 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
vickt/LLama-chinese-med-chat-lora
|
vickt
| 2023-11-13T13:31:19Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-11-13T13:31:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Rishi-19/Profanity_Check_LemmatizedData
|
Rishi-19
| 2023-11-13T13:17:41Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-13T13:03:45Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Rishi-19/Profanity_Check_LemmatizedData
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rishi-19/Profanity_Check_LemmatizedData
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0658
- Validation Loss: 0.1598
- Train Accuracy: 0.9577
- Epoch: 2
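A hedged inference sketch, not part of the auto-generated card; the `tf` tag suggests TensorFlow weights are available, and the index-to-label mapping is not documented here.
```python
# Hedged sketch: score a sentence with the fine-tuned TensorFlow checkpoint.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "Rishi-19/Profanity_Check_LemmatizedData"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer(["example text to screen"], return_tensors="tf", truncation=True)
logits = model(**inputs).logits
# Predicted class index; the index-to-label mapping is not documented in this card.
print(tf.argmax(logits, axis=-1).numpy())
```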
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3145, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2045 | 0.1395 | 0.9545 | 0 |
| 0.1078 | 0.1301 | 0.9588 | 1 |
| 0.0658 | 0.1598 | 0.9577 | 2 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
nicotaroni/sentiment_analysis_
|
nicotaroni
| 2023-11-13T13:09:33Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-11-13T13:09:05Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nicotaroni/sentiment_analysis_
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nicotaroni/sentiment_analysis_")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
TheBloke/deepseek-coder-6.7B-instruct-AWQ
|
TheBloke
| 2023-11-13T12:58:45Z | 2,187 | 15 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:quantized:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2023-11-05T03:10:55Z |
---
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
inference: false
license: other
license_link: LICENSE
license_name: deepseek
model_creator: DeepSeek
model_name: Deepseek Coder 6.7B Instruct
model_type: deepseek
prompt_template: 'You are an AI programming assistant, utilizing the Deepseek Coder
model, developed by Deepseek Company, and you only answer questions related to computer
science. For politically sensitive questions, security and privacy issues, and other
non-computer science questions, you will refuse to answer.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Deepseek Coder 6.7B Instruct - AWQ
- Model creator: [DeepSeek](https://huggingface.co/deepseek-ai)
- Original model: [Deepseek Coder 6.7B Instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct)
<!-- description start -->
## Description
This repo contains AWQ model files for [DeepSeek's Deepseek Coder 6.7B Instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-GGUF)
* [DeepSeek's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: DeepSeek
```
You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/deepseek-coder-6.7B-instruct-AWQ/tree/main) | 4 | 128 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 16384 | 3.89 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/deepseek-coder-6.7B-instruct-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `deepseek-coder-6.7B-instruct-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/deepseek-coder-6.7B-instruct-AWQ --quantization awq
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Plain (non-f) string: {prompt} is filled in below via .format()
prompt_template='''You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction:
{prompt}
### Response:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/deepseek-coder-6.7B-instruct-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/deepseek-coder-6.7B-instruct-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using AutoAWQ
### Install the AutoAWQ package
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later.
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### AutoAWQ example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/deepseek-coder-6.7B-instruct-AWQ"
# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
prompt = "Tell me about AI"
prompt_template=f'''You are an AI programming assistant, utilizing the Deepseek Coder model, developed by Deepseek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction:
{prompt}
### Response:
'''
print("*** Running model.generate:")
token_input = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
token_input,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("LLM output: ", text_output)
"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: DeepSeek's Deepseek Coder 6.7B Instruct
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder
Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus by employing a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.
### 2. Model Summary
deepseek-coder-6.7b-instruct is a 6.7B parameter model initialized from deepseek-coder-6.7b-base and fine-tuned on 2B tokens of instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 3. How to Use
Here are some examples of how to use our model.
#### Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
# 32021 is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=32021)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [agi_code@deepseek.com](mailto:agi_code@deepseek.com).
|
DContrerasF/q-FrozenLake-v1-4x4-noSlippery
|
DContrerasF
| 2023-11-13T12:53:42Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T12:53:40Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
# `load_from_hub` is the helper defined in the Deep Reinforcement Learning Course notebooks
model = load_from_hub(repo_id="DContrerasF/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
BachNgoH/ZaloAI-deberta-v3
|
BachNgoH
| 2023-11-13T12:51:06Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2023-11-13T12:50:13Z |
---
tags:
- generated_from_trainer
model-index:
- name: tmp_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp_trainer
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.14.1
|
kg-09/lora-raw_photo-SSD-1B
|
kg-09
| 2023-11-13T12:50:58Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:segmind/SSD-1B",
"base_model:adapter:segmind/SSD-1B",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-11-13T05:47:34Z |
---
license: openrail++
base_model: segmind/SSD-1B
instance_prompt: r4w photo
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - jsram/lora-raw_photo-SSD-1B
These are LoRA adaptation weights for segmind/SSD-1B. The weights were trained on the instance prompt `r4w photo` using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
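A hedged usage sketch, not part of the original card, assuming the standard `diffusers` LoRA loading path for an SDXL-architecture base such as SSD-1B (the repo id used here is this listing's `kg-09/lora-raw_photo-SSD-1B`, not the `jsram` name in the heading).
```python
# Hedged sketch: attach the LoRA weights from this listing to the SSD-1B base pipeline
# and prompt with the instance token "r4w photo".
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/SSD-1B", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kg-09/lora-raw_photo-SSD-1B")

image = pipe("r4w photo of a mountain lake at sunrise", num_inference_steps=25).images[0]
image.save("raw_photo_lora.png")
```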
|
abdelmageed/distilbert-base-uncased-distilled-clinc
|
abdelmageed
| 2023-11-13T12:26:53Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-13T12:06:11Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9416129032258065
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1005
- Accuracy: 0.9416
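A hedged usage sketch, not part of the auto-generated card:
```python
# Hedged sketch: classify an utterance into one of the CLINC150-style intents.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="abdelmageed/distilbert-base-uncased-distilled-clinc",
)
print(classifier("Please transfer 100 dollars to my savings account"))
```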
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9034 | 1.0 | 318 | 0.5760 | 0.7342 |
| 0.45 | 2.0 | 636 | 0.2855 | 0.8784 |
| 0.2544 | 3.0 | 954 | 0.1801 | 0.9223 |
| 0.1773 | 4.0 | 1272 | 0.1399 | 0.93 |
| 0.1427 | 5.0 | 1590 | 0.1212 | 0.9329 |
| 0.1247 | 6.0 | 1908 | 0.1119 | 0.9384 |
| 0.1145 | 7.0 | 2226 | 0.1063 | 0.9419 |
| 0.1078 | 8.0 | 2544 | 0.1031 | 0.9419 |
| 0.1042 | 9.0 | 2862 | 0.1013 | 0.9410 |
| 0.102 | 10.0 | 3180 | 0.1005 | 0.9416 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
lifestylemedia/text_to_speech_backend
|
lifestylemedia
| 2023-11-13T12:23:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-10-19T12:17:08Z |
# TTS-RVC-API
Yes, we can use Coqui with RVC!
# Why combine the two frameworks?
Coqui is a text-to-speech framework (vocoder and encoder), but cloning your own voice with it takes a very long time and offers no guarantee of better results. That's why we use RVC (Retrieval-Based Voice Conversion), which works only for speech-to-speech. You can train the model with just 2-3 minutes of data, as it uses HuBERT (a pre-trained model that fine-tunes quickly and gives better results).
## Installation
How do you use the Coqui + RVC API? First, clone the repository:
```bash
git clone https://github.com/skshadan/TTS-RVC-API.git
```
```bash
python -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt
pip install TTS
python -m uvicorn app.main:app
```
Now update `config.toml` with the relative paths: configure the `model_dir` path, or set a `speaker_name` in the request body.
The RVC v2 model is mounted in the container at:
```
/
└── models
    └── speaker1
        ├── speaker1.pth
        └── speaker1.index
```
Now run this:
```bash
python -m uvicorn app.main:app
```
## POST REQUEST
```
http://localhost:8000/generate
```
```
emotions: happy, sad, angry, dull
speed: 1.0 - 2.0
```
```json
{
"speaker_name": "speaker3",
"input_text": "Hey there! Welcome to the world",
"emotion": "Surprise",
"speed": 1.0
}
```
# CODE SNIPPET
```python
import requests
import json
import time
url = "http://127.0.0.1:8000/generate"
payload = json.dumps({
"speaker_name": "speaker3",
"input_text": "Are you mad? The way you've betrayed me is beyond comprehension, a slap in the face that's left me boiling with an anger so intense it's as if you've thrown gasoline on a fire, utterly destroying any trust that was left.",
"emotion": "Dull",
"speed": 1.0
})
headers = {
'Content-Type': 'application/json'
}
start_time = time.time() # Start the timer
response = requests.request("POST", url, headers=headers, data=payload)
end_time = time.time() # Stop the timer
if response.status_code == 200:
    audio_content = response.content
    # Save the audio to a file
    with open("generated_audio.wav", "wb") as audio_file:
        audio_file.write(audio_content)
    print("Audio saved successfully.")
    print("Time taken:", end_time - start_time, "seconds")
else:
    print("Error:", response.text)
```
## Feedback
If you have any feedback or issues, please reach out to shadankhantech@gmail.com.
|
lizhuang144/flan-t5-base-VG-factual-sg-id
|
lizhuang144
| 2023-11-13T12:08:05Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-12T11:50:23Z |
Please see https://github.com/zhuang-li/FACTUAL for a detailed description of this model.
|
lizhuang144/flan-t5-small-VG-factual-sg-id
|
lizhuang144
| 2023-11-13T12:03:28Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-09-12T11:49:57Z |
Please see https://github.com/zhuang-li/FACTUAL for a detailed description of this model.
|
ZivK/ppo-Huggy
|
ZivK
| 2023-11-13T11:31:31Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-11-13T11:31:26Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ZivK/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
VoidZeroe/llama3.1-model
|
VoidZeroe
| 2023-11-13T11:15:46Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-11-13T11:13:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
learn3r/longt5_xl_gov_bp_15
|
learn3r
| 2023-11-13T11:09:25Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:learn3r/gov_report_bp",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-12T01:39:17Z |
---
base_model: /exports/eddie/scratch/s1970716/models/summarization/longt5_xl_gov_bp_10/checkpoint-680
tags:
- generated_from_trainer
datasets:
- learn3r/gov_report_bp
model-index:
- name: longt5_xl_gov_bp_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_gov_bp_15
This model is a fine-tuned version of [/exports/eddie/scratch/s1970716/models/summarization/longt5_xl_gov_bp_10/checkpoint-680](https://huggingface.co//exports/eddie/scratch/s1970716/models/summarization/longt5_xl_gov_bp_10/checkpoint-680) on the learn3r/gov_report_bp dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7126
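A hedged inference sketch, not part of the auto-generated card; the report text below is a placeholder for a long document from learn3r/gov_report_bp.
```python
# Hedged sketch: summarize a long report with the fine-tuned LongT5-XL checkpoint.
# The XL model is large; a sizeable GPU (or offloading) is needed in practice.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "learn3r/longt5_xl_gov_bp_15"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

report = "..."  # placeholder for a long government report
inputs = tokenizer(report, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```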
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2051 | 1.0 | 136 | 1.7126 |
| 0.1732 | 1.99 | 272 | 1.8857 |
| 0.1777 | 3.0 | 409 | 1.9036 |
| 0.1122 | 4.0 | 545 | 1.9538 |
| 0.1098 | 4.99 | 680 | 2.1134 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
abdelmageed/distilbert-base-uncased-finetuned-clinc
|
abdelmageed
| 2023-11-13T11:02:58Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-12T20:38:40Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9164516129032259
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7725
- Accuracy: 0.9165
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2924 | 1.0 | 318 | 3.2763 | 0.7284 |
| 2.6141 | 2.0 | 636 | 1.8625 | 0.8365 |
| 1.5389 | 3.0 | 954 | 1.1513 | 0.8984 |
| 1.0087 | 4.0 | 1272 | 0.8540 | 0.9135 |
| 0.793 | 5.0 | 1590 | 0.7725 | 0.9165 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
soongbren/fine-tuned-tiny-bert-base-uncased-large-dataset
|
soongbren
| 2023-11-13T10:57:30Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:patrickxchong/bert-tiny-bahasa-cased-sentiment",
"base_model:finetune:patrickxchong/bert-tiny-bahasa-cased-sentiment",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-06T14:24:37Z |
---
license: apache-2.0
base_model: patrickxchong/bert-tiny-bahasa-cased-sentiment
tags:
- generated_from_trainer
model-index:
- name: fine-tuned-tiny-bert-base-uncased-large-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-tiny-bert-base-uncased-large-dataset
This model is a fine-tuned version of [patrickxchong/bert-tiny-bahasa-cased-sentiment](https://huggingface.co/patrickxchong/bert-tiny-bahasa-cased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0000
- eval_accuracy: {'accuracy': 1.0}
- eval_f1score: {'f1': 1.0}
- eval_runtime: 7.9696
- eval_samples_per_second: 115.188
- eval_steps_per_second: 14.43
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 642
- num_epochs: 7
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
koflynn/distilbert-base-uncased-finetuned-squad
|
koflynn
| 2023-11-13T10:52:23Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-11-10T20:50:40Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1541
## Model description
More information needed
## Intended uses & limitations
More information needed
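As a rough illustration of the intended use (extractive QA), here is a minimal inference sketch with the 🤗 `pipeline` API, assuming the checkpoint in this repo loads as a standard question-answering model; the question and context are hypothetical examples:
```python
from transformers import pipeline

# Minimal sketch: extractive QA with the fine-tuned checkpoint from this repo.
qa = pipeline("question-answering", model="koflynn/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",  # hypothetical example
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```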
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2183 | 1.0 | 5533 | 1.1680 |
| 0.9657 | 2.0 | 11066 | 1.1209 |
| 0.7457 | 3.0 | 16599 | 1.1541 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
LI-ST/Mistral-7B-ko-v0.1
|
LI-ST
| 2023-11-13T10:50:19Z | 2,215 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"ko",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-13T10:28:54Z |
---
license: cc-by-nc-nd-4.0
language:
- en
- ko
library_name: transformers
pipeline_tag: text-generation
---
<p><h1>Mistral-7B-ko-v0.1</h1></p>
Base model: Open-Orca/Mistral-7B-OpenOrca
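A minimal text-generation sketch, assuming the weights in this repo load with the standard 🤗 Transformers causal-LM API; the chat/prompt template is not documented here, so a plain prompt is used and the example question is hypothetical:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LI-ST/Mistral-7B-ko-v0.1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?" (hypothetical example)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```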
|
RaThorat/en_grantss
|
RaThorat
| 2023-11-13T10:38:07Z | 3 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2022-12-13T15:54:51Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_grantss
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.769098972
- name: NER Recall
type: recall
value: 0.6617812852
- name: NER F Score
type: f_score
value: 0.7114156528
---
## Introduction
Three variants of the model are built with spaCy 3 for grant applications.
Each is a simple custom named entity recognition model trained from scratch with the annotation tool Prodigy (prodi.gy).
GitHub info: https://github.com/RaThorat/ner_model_prodigy
The most general model is 'en_grantss'. The model en_ncv is more suitable for extracting entities from narrative CVs.
The model en_grant is the first model in the series.
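A minimal usage sketch, assuming the packaged model has been installed in the current environment (for example from the wheel attached to this repo) so that `spacy.load` can resolve it; the example sentence is hypothetical:
```python
import spacy

# Assumes the en_grantss package is installed in the current environment.
nlp = spacy.load("en_grantss")
doc = nlp("The project received a EUR 50,000 grant from NWO to deposit data in a DANS repository.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels come from the scheme listed below (e.g. MONEY, ORG, REPOSITORY)
```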
| Feature | Description |
| --- | --- |
| **Name** | `en_grantss` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | research grant applications |
| **License** | n/a |
| **Author** | [Rahul Thorat]() |
### Label Scheme
<details>
<summary>View label scheme (18 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `ACTIVITY`, `DISCIPLINE`, `EVENT`, `GPE`, `JOURNAL`, `KEYWORD`, `LICENSE`, `MEDIUM`, `METASTD`, `MONEY`, `ORG`, `PERSON`, `POSITION`, `PRODUCT`, `RECOGNITION`, `REF`, `REPOSITORY`, `WEBSITE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 71.14 |
| `ENTS_P` | 76.91 |
| `ENTS_R` | 66.18 |
| `TOK2VEC_LOSS` | 1412244.09 |
| `NER_LOSS` | 1039417.96 |
|
GGbond-No1/Taxi
|
GGbond-No1
| 2023-11-13T10:17:22Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T10:17:18Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.44 +/- 2.82
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="GGbond-No1/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
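The `load_from_hub` helper used above is not part of a published package; a minimal sketch of one possible definition, assuming the Q-table was pushed as a pickle file with the course's standard tooling, is:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-learning model from the Hub and deserialize it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```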
|
mohan007/Qwen-VL-Chat-Int4
|
mohan007
| 2023-11-13T10:15:02Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2308.12966",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2023-11-13T10:15:01Z |
---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
---
# Qwen-VL-Chat-Int4
<br>
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_vl.jpg" width="400"/>
<p>
<br>
<p align="center">
Qwen-VL <a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖 <a> | <a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a>  | Qwen-VL-Chat <a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖 <a>| <a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a>  | Qwen-VL-Chat-Int4 <a href="https://huggingface.co/Qwen/Qwen-VL-Chat-Int4">🤗</a>
<br>
<a href="assets/wechat.png">WeChat</a>   |   <a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>   |   <a href="https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary">Demo</a>  |  <a href="https://arxiv.org/abs/2308.12966">Report</a>
</p>
<br>
**Qwen-VL** 是阿里云研发的大规模视觉语言模型(Large Vision Language Model, LVLM)。Qwen-VL 可以以图像、文本、检测框作为输入,并以文本和检测框作为输出。Qwen-VL 系列模型性能强大,具备多语言对话、多图交错对话等能力,并支持中文开放域定位和细粒度图像识别与理解。
**Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the large model series, Qwen (abbr. Tongyi Qianwen), proposed by Alibaba Cloud. Qwen-VL accepts image, text, and bounding box as inputs, outputs text and bounding box. The features of Qwen-VL include:
目前,我们提供了Qwen-VL和Qwen-VL-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于模型的信息,请点击[链接](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md)查看我们的技术备忘录。本仓库为Qwen-VL-Chat的量化模型Qwen-VL-Chat-Int4仓库。
We release Qwen-VL and Qwen-VL-Chat, which are pretrained model and Chat model respectively. For more details about Qwen-VL, please refer to our [technical memo](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md). This repo is the one for Qwen-VL-Chat-Int4.
<br>
## 安装要求 (Requirements)
* python 3.8及以上版本
* pytorch2.0及以上版本
* 建议使用CUDA 11.4及以上
* python 3.8 and above
* pytorch 2.0 and above are recommended
* CUDA 11.4 and above are recommended
<br>
## 快速开始 (Quickstart)
我们提供简单的示例来说明如何利用 🤗 Transformers 快速使用Qwen-VL-Chat-Int4。
在开始前,请确保你已经配置好环境并安装好相关的代码包。最重要的是,确保你满足上述要求,然后安装相关的依赖库。
Below, we provide simple examples to show how to use Qwen-VL-Chat-Int4 with 🤗 Transformers.
Before running the code, make sure you have setup the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.
```bash
pip install -r requirements.txt
pip install optimum
git clone https://github.com/JustinLin610/AutoGPTQ.git && cd AutoGPTQ
pip install -v .
```
接下来你可以开始使用Transformers来使用我们的模型。关于视觉模块的更多用法,请参考[教程](TUTORIAL.md)。
Now you can start with Transformers. For more usage of the vision encoder, please refer to the [tutorial](TUTORIAL_zh.md).
#### 🤗 Transformers
To use Qwen-VL-Chat-Int4 for the inference, all you need to do is to input a few lines of codes as demonstrated below. However, **please make sure that you are using the latest code.**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(1234)
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat-Int4", trust_remote_code=True)
# use cuda device
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat-Int4", device_map="cuda", trust_remote_code=True).eval()
# 1st dialogue turn
query = tokenizer.from_list_format([
{'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
{'text': '这是什么'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# 图中是一名年轻女子在沙滩上和她的狗玩耍,狗的品种可能是拉布拉多。她们坐在沙滩上,狗的前腿抬起来,似乎在和人类击掌。两人之间充满了信任和爱。
# 2nd dialogue turn
response, history = model.chat(tokenizer, '输出"击掌"的检测框', history=history)
print(response)
# <ref>击掌</ref><box>(517,508),(589,611)</box>
image = tokenizer.draw_bbox_on_latest_picture(response, history)
if image:
image.save('1.jpg')
else:
print("no box")
```
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_highfive.jpg" width="500"/>
<p>
<br>
## 量化 (Quantization)
### 效果评测 (Performance)
我们列出不同精度下模型在评测基准 **[TouchStone](https://github.com/OFA-Sys/TouchStone)** 上的表现,并发现量化模型并没有显著性能损失。结果如下所示:
We illustrate the model performance of both BF16 and Int4 models on the benchmark **[TouchStone](https://github.com/OFA-Sys/TouchStone)**, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
| Quantization | ZH. | EN |
| ------------ | :--------: | :-----------: |
| BF16 | 401.2 | 645.2 |
| Int4 | 386.6 | 651.4 |
### 推理速度 (Inference Speed)
我们测算了在输入一张图片(即258个token)的条件下BF16和Int4的模型生成1792 (2048-258) 和 7934 (8192-258) 个token的平均速度。
We measured the average inference speed (tokens/s) of generating 1792 (2048-258) and 7934 (8192-258) tokens with the context of an image (which takes 258 tokens) under BF16 precision and Int4 quantization, respectively.
| Quantization | Speed (2048 tokens) | Speed (8192 tokens) |
| ------------ | :-----------------: | :-----------------: |
| BF16 | 28.87 | 24.32 |
| Int4 | 37.79 | 34.34 |
推理速度测算是在单卡 A100-SXM4-80G GPU上运行,使用PyTorch 2.0.1及CUDA 11.4。
The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.4.
### GPU显存占用 (GPU Memory Usage)
我们还测算了在一张图片输入的条件下BF16和Int4模型生成1792 (2048-258) 和 7934 (8192-258) 个token所需显存。结果如下所示:
We also profile the peak GPU memory usage for encoding 1792 (2048-258) tokens (including an image) as context (and generating single token) and generating 7934 (8192-258) tokens (with an image as context) under BF16 or Int4 quantization level, respectively. The results are shown below.
| Quantization | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
| ------------ | :---------------------------------: | :-----------------------------------: |
| BF16 | 22.60GB | 28.01GB |
| Int4 | 11.82GB | 17.23GB |
上述速度和显存测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py)完成。
The above speed and memory profiling are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py).
<br>
## 评测 (Evaluation)
我们从两个角度评测了两个模型的能力:
1. 在**英文标准 Benchmark** 上评测模型的基础任务能力。目前评测了四大类多模态任务:
- Zero-shot Caption: 评测模型在未见过数据集上的零样本图片描述能力;
- General VQA: 评测模型的通用问答能力,例如判断题、颜色、个数、类目等问答能力;
- Text-based VQA:评测模型对于图片中文字相关的识别/问答能力,例如文档问答、图表问答、文字问答等;
- Referring Expression Compression:评测模型给定物体描述画检测框的能力;
2. **试金石 (TouchStone)**:为了评测模型整体的图文对话能力和人类对齐水平。我们为此构建了一个基于 GPT4 打分来评测 LVLM 模型的 Benchmark:TouchStone。在 TouchStone-v0.1 中:
- 评测基准总计涵盖 300+张图片、800+道题目、27个类别。包括基础属性问答、人物地标问答、影视作品问答、视觉推理、反事实推理、诗歌创作、故事写作,商品比较、图片解题等**尽可能广泛的类别**。
- 为了弥补目前 GPT4 无法直接读取图片的缺陷,我们给所有的带评测图片提供了**人工标注的充分详细描述**,并且将图片的详细描述、问题和模型的输出结果一起交给 GPT4 打分。
- 评测同时包含英文版本和中文版本。
评测结果如下:
We evaluated the model's ability from two perspectives:
1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
- Zero-shot Caption: Evaluate model's zero-shot image captioning ability on unseen datasets;
- General VQA: Evaluate the general question-answering ability of pictures, such as the judgment, color, number, category, etc;
- Text-based VQA: Evaluate the model's ability to recognize text in pictures, such as document QA, chart QA, etc;
- Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
- The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories. Such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc;
- In order to break the current limitation of GPT4 in terms of direct image input, TouchStone provides fine-grained image annotations by human labeling. These detailed annotations, along with the questions and the model's output, are then presented to GPT4 for scoring.
- The benchmark includes both English and Chinese versions.
The results of the evaluation are as follows:
Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has a more comprehensive coverage in terms of capability range.
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
<p>
### 零样本图像描述 & 通用视觉问答 (Zero-shot Captioning & General VQA)
<table>
<thead>
<tr>
<th rowspan="2">Model type</th>
<th rowspan="2">Model</th>
<th colspan="2">Zero-shot Captioning</th>
<th colspan="5">General VQA</th>
</tr>
<tr>
<th>NoCaps</th>
<th>Flickr30K</th>
<th>VQAv2<sup>dev</sup></th>
<th>OK-VQA</th>
<th>GQA</th>
<th>SciQA-Img<br>(0-shot)</th>
<th>VizWiz<br>(0-shot)</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="10">Generalist<br>Models</td>
<td>Flamingo-9B</td>
<td>-</td>
<td>61.5</td>
<td>51.8</td>
<td>44.7</td>
<td>-</td>
<td>-</td>
<td>28.8</td>
</tr>
<tr>
<td>Flamingo-80B</td>
<td>-</td>
<td>67.2</td>
<td>56.3</td>
<td>50.6</td>
<td>-</td>
<td>-</td>
<td>31.6</td>
</tr>
<tr>
<td>Unified-IO-XL</td>
<td>100.0</td>
<td>-</td>
<td>77.9</td>
<td>54.0</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Kosmos-1</td>
<td>-</td>
<td>67.1</td>
<td>51.0</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>29.2</td>
</tr>
<tr>
<td>Kosmos-2</td>
<td>-</td>
<td>66.7</td>
<td>45.6</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>BLIP-2 (Vicuna-13B)</td>
<td>103.9</td>
<td>71.6</td>
<td>65.0</td>
<td>45.9</td>
<td>32.3</td>
<td>61.0</td>
<td>19.6</td>
</tr>
<tr>
<td>InstructBLIP (Vicuna-13B)</td>
<td><strong>121.9</strong></td>
<td>82.8</td>
<td>-</td>
<td>-</td>
<td>49.5</td>
<td>63.1</td>
<td>33.4</td>
</tr>
<tr>
<td>Shikra (Vicuna-13B)</td>
<td>-</td>
<td>73.9</td>
<td>77.36</td>
<td>47.16</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td><strong>Qwen-VL (Qwen-7B)</strong></td>
<td>121.4</td>
<td><b>85.8</b></td>
<td><b>78.8</b></td>
<td><b>58.6</b></td>
<td><b>59.3</b></td>
<td>67.1</td>
<td>35.2</td>
</tr>
<!-- <tr>
<td>Qwen-VL (4-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>63.6</td>
<td>-</td>
<td>-</td>
<td>39.1</td>
</tr> -->
<tr>
<td>Qwen-VL-Chat</td>
<td>120.2</td>
<td>81.0</td>
<td>78.2</td>
<td>56.6</td>
<td>57.5</td>
<td><b>68.2</b></td>
<td><b>38.9</b></td>
</tr>
<!-- <tr>
<td>Qwen-VL-Chat (4-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>60.6</td>
<td>-</td>
<td>-</td>
<td>44.45</td>
</tr> -->
<tr>
<td>Previous SOTA<br>(Per Task Fine-tuning)</td>
<td>-</td>
<td>127.0<br>(PALI-17B)</td>
<td>84.5<br>(InstructBLIP<br>-FlanT5-XL)</td>
<td>86.1<br>(PALI-X<br>-55B)</td>
<td>66.1<br>(PALI-X<br>-55B)</td>
<td>72.1<br>(CFR)</td>
<td>92.53<br>(LLaVa+<br>GPT-4)</td>
<td>70.9<br>(PALI-X<br>-55B)</td>
</tr>
</tbody>
</table>
- 在 Zero-shot Caption 中,Qwen-VL 在 Flickr30K 数据集上取得了 **SOTA** 的结果,并在 Nocaps 数据集上取得了和 InstructBlip 可竞争的结果。
- 在 General VQA 中,Qwen-VL 取得了 LVLM 模型同等量级和设定下 **SOTA** 的结果。
- For zero-shot image captioning, Qwen-VL achieves the **SOTA** on Flickr30K and competitive results on Nocaps with InstructBlip.
- For general VQA, Qwen-VL achieves the **SOTA** under the same generalist LVLM scale settings.
### 文本导向的视觉问答 (Text-oriented VQA)
<table>
<thead>
<tr>
<th>Model type</th>
<th>Model</th>
<th>TextVQA</th>
<th>DocVQA</th>
<th>ChartQA</th>
<th>AI2D</th>
<th>OCR-VQA</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="5">Generalist Models</td>
<td>BLIP-2 (Vicuna-13B)</td>
<td>42.4</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>InstructBLIP (Vicuna-13B)</td>
<td>50.7</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>mPLUG-DocOwl (LLaMA-7B)</td>
<td>52.6</td>
<td>62.2</td>
<td>57.4</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Pic2Struct-Large (1.3B)</td>
<td>-</td>
<td><b>76.6</b></td>
<td>58.6</td>
<td>42.1</td>
<td>71.3</td>
</tr>
<tr>
<td>Qwen-VL (Qwen-7B)</td>
<td><b>63.8</b></td>
<td>65.1</td>
<td><b>65.7</b></td>
<td><b>62.3</b></td>
<td><b>75.7</b></td>
</tr>
<tr>
<td>Specialist SOTAs<br>(Specialist/Finetuned)</td>
<td>PALI-X-55B (Single-task FT)<br>(Without OCR Pipeline)</td>
<td>71.44</td>
<td>80.0</td>
<td>70.0</td>
<td>81.2</td>
<td>75.0</td>
</tr>
</tbody>
</table>
- 在文字相关的识别/问答评测上,取得了当前规模下通用 LVLM 达到的最好结果。
- 分辨率对上述某几个评测非常重要,大部分 224 分辨率的开源 LVLM 模型无法完成以上评测,或只能通过切图的方式解决。Qwen-VL 将分辨率提升到 448,可以直接以端到端的方式进行以上评测。Qwen-VL 在很多任务上甚至超过了 1024 分辨率的 Pic2Struct-Large 模型。
- In text-related recognition/QA evaluation, Qwen-VL achieves the SOTA under the generalist LVLM scale settings.
- Resolution is important for several above evaluations. While most open-source LVLM models with 224 resolution are incapable of these evaluations or can only solve these by cutting images, Qwen-VL scales the resolution to 448 so that it can be evaluated end-to-end. Qwen-VL even outperforms Pic2Struct-Large models of 1024 resolution on some tasks.
### 细粒度视觉定位 (Referring Expression Comprehension)
<table>
<thead>
<tr>
<th rowspan="2">Model type</th>
<th rowspan="2">Model</th>
<th colspan="3">RefCOCO</th>
<th colspan="3">RefCOCO+</th>
<th colspan="2">RefCOCOg</th>
<th>GRIT</th>
</tr>
<tr>
<th>val</th>
<th>test-A</th>
<th>test-B</th>
<th>val</th>
<th>test-A</th>
<th>test-B</th>
<th>val-u</th>
<th>test-u</th>
<th>refexp</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="8">Generalist Models</td>
<td>GPV-2</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>51.50</td>
</tr>
<tr>
<td>OFA-L*</td>
<td>79.96</td>
<td>83.67</td>
<td>76.39</td>
<td>68.29</td>
<td>76.00</td>
<td>61.75</td>
<td>67.57</td>
<td>67.58</td>
<td>61.70</td>
</tr>
<tr>
<td>Unified-IO</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td><b>78.61</b></td>
</tr>
<tr>
<td>VisionLLM-H</td>
<td></td>
<td>86.70</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Shikra-7B</td>
<td>87.01</td>
<td>90.61</td>
<td>80.24 </td>
<td>81.60</td>
<td>87.36</td>
<td>72.12</td>
<td>82.27</td>
<td>82.19</td>
<td>69.34</td>
</tr>
<tr>
<td>Shikra-13B</td>
<td>87.83 </td>
<td>91.11</td>
<td>81.81</td>
<td>82.89</td>
<td>87.79</td>
<td>74.41</td>
<td>82.64</td>
<td>83.16</td>
<td>69.03</td>
</tr>
<tr>
<td>Qwen-VL-7B</td>
<td><b>89.36</b></td>
<td>92.26</td>
<td><b>85.34</b></td>
<td><b>83.12</b></td>
<td>88.25</td>
<td><b>77.21</b></td>
<td>85.58</td>
<td>85.48</td>
<td>78.22</td>
</tr>
<tr>
<td>Qwen-VL-7B-Chat</td>
<td>88.55</td>
<td><b>92.27</b></td>
<td>84.51</td>
<td>82.82</td>
<td><b>88.59</b></td>
<td>76.79</td>
<td><b>85.96</b></td>
<td><b>86.32</b></td>
<td>-</td>
</tr>
<tr>
<td rowspan="3">Specialist SOTAs<br>(Specialist/Finetuned)</td>
<td>G-DINO-L</td>
<td>90.56 </td>
<td>93.19</td>
<td>88.24</td>
<td>82.75</td>
<td>88.95</td>
<td>75.92</td>
<td>86.13</td>
<td>87.02</td>
<td>-</td>
</tr>
<tr>
<td>UNINEXT-H</td>
<td>92.64 </td>
<td>94.33</td>
<td>91.46</td>
<td>85.24</td>
<td>89.63</td>
<td>79.79</td>
<td>88.73</td>
<td>89.37</td>
<td>-</td>
</tr>
<tr>
<td>ONE-PEACE</td>
<td>92.58 </td>
<td>94.18</td>
<td>89.26</td>
<td>88.77</td>
<td>92.21</td>
<td>83.23</td>
<td>89.22</td>
<td>89.27</td>
<td>-</td>
</tr>
</tbody>
</table>
- 在定位任务上,Qwen-VL 全面超过 Shikra-13B,取得了目前 Generalist LVLM 模型上在 Refcoco 上的 **SOTA**。
- Qwen-VL 并没有在任何中文定位数据上训练过,但通过中文 Caption 数据和 英文 Grounding 数据的训练,可以 Zero-shot 泛化出中文 Grounding 能力。
我们提供了以上**所有**评测脚本以供复现我们的实验结果。请阅读 [eval/EVALUATION.md](eval/EVALUATION.md) 了解更多信息。
- Qwen-VL achieves the **SOTA** in all above referring expression comprehension benchmarks.
- Qwen-VL has not been trained on any Chinese grounding data, but it can still generalize to Chinese grounding tasks in a zero-shot way through training on Chinese caption data and English grounding data.
We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.
### 闲聊能力测评 (Chat Evaluation)
TouchStone 是一个基于 GPT4 打分来评测 LVLM 模型的图文对话能力和人类对齐水平的基准。它涵盖了 300+张图片、800+道题目、27个类别,包括基础属性、人物地标、视觉推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。关于 TouchStone 的详细介绍,请参考[touchstone/README_CN.md](touchstone/README_CN.md)了解更多信息。
TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities of the LVLM model on text-image dialogue and alignment levels with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc. Please read [touchstone/README.md](touchstone/README.md) for more information.
#### 英语 (English)
| Model | Score |
|---------------|-------|
| PandaGPT | 488.5 |
| MiniGPT4 | 531.7 |
| InstructBLIP | 552.4 |
| LLaMA-AdapterV2 | 590.1 |
| mPLUG-Owl | 605.4 |
| LLaVA | 602.7 |
| Qwen-VL-Chat | 645.2 |
#### 中文 (Chinese)
| Model | Score |
|---------------|-------|
| VisualGLM | 247.1 |
| Qwen-VL-Chat | 401.2 |
Qwen-VL-Chat 模型在中英文的对齐评测中均取得当前 LVLM 模型下的最好结果。
Qwen-VL-Chat has achieved the best results in both Chinese and English alignment evaluation.
<br>
## 常见问题 (FAQ)
如遇到问题,敬请查阅 [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。
If you meet problems, please refer to [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ.md) and the issues first to search a solution before you launch a new issue.
<br>
## 使用协议 (License Agreement)
研究人员与开发者可使用Qwen-VL和Qwen-VL-Chat或进行二次开发。我们同样允许商业使用,具体细节请查看[LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE)。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
Researchers and developers are free to use the codes and model weights of both Qwen-VL and Qwen-VL-Chat. We also allow their commercial use. Check our license at [LICENSE](LICENSE) for more details.
<br>
## 引用 (Citation)
如果你觉得我们的论文和代码对你的研究有帮助,请考虑:star: 和引用 :pencil: :)
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
```BibTeX
@article{Qwen-VL,
title={Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
<br>
## 联系我们 (Contact Us)
如果你想给我们的研发团队和产品团队留言,请通过邮件(qianwen_opensource@alibabacloud.com)联系我们。
If you are interested to leave a message to either our research team or product team, feel free to send an email to qianwen_opensource@alibabacloud.com.
|
soongbren/electra-base-discriminator-bahasa-cased
|
soongbren
| 2023-11-13T10:08:47Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"electra",
"text-classification",
"generated_from_trainer",
"base_model:mesolitica/electra-base-discriminator-bahasa-cased",
"base_model:finetune:mesolitica/electra-base-discriminator-bahasa-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-13T10:08:30Z |
---
base_model: mesolitica/electra-base-discriminator-bahasa-cased
tags:
- generated_from_trainer
model-index:
- name: electra-base-discriminator-bahasa-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-discriminator-bahasa-cased
This model is a fine-tuned version of [mesolitica/electra-base-discriminator-bahasa-cased](https://huggingface.co/mesolitica/electra-base-discriminator-bahasa-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6106
- eval_accuracy: {'accuracy': 0.696078431372549}
- eval_f1score: {'f1': 0.6848912520917027}
- eval_runtime: 30.6032
- eval_samples_per_second: 29.997
- eval_steps_per_second: 3.758
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 642
- num_epochs: 7
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Adminhuggingface/DBO
|
Adminhuggingface
| 2023-11-13T10:03:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-13T07:48:51Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: Different photos of a Ramcharn person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Adminhuggingface/DBO
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "Different photos of a Ramcharn person" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
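A minimal inference sketch with 🤗 Diffusers, assuming the repo holds standard diffusers-format weights and reusing the instance token from the metadata above; the extra prompt wording and output filename are hypothetical:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch under assumptions: the repo loads as a standard StableDiffusionPipeline.
pipe = StableDiffusionPipeline.from_pretrained("Adminhuggingface/DBO", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of a Ramcharn person, portrait, high quality").images[0]
image.save("ramcharn.png")
```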
|
ibrahim-601/mit-b0-building-damage-lora
|
ibrahim-601
| 2023-11-13T10:01:42Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"region:us"
] | null | 2023-11-04T15:21:35Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
model-index:
- name: mit-b0-building-damage-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mit-b0-building-damage-lora
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0661
- Mean Iou: 0.3623
- Mean Accuracy: 0.7245
- Overall Accuracy: 0.7245
- Accuracy Building: 0.7245
- Iou Building: 0.7245
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Building | Iou Building |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------:|:------------:|
| 0.0618 | 1.0 | 700 | 0.1463 | 0.4063 | 0.8125 | 0.8125 | 0.8125 | 0.8125 |
| 0.0813 | 2.0 | 1400 | 0.0861 | 0.3950 | 0.7900 | 0.7900 | 0.7900 | 0.7900 |
| 0.0715 | 3.0 | 2100 | 0.0856 | 0.3844 | 0.7689 | 0.7689 | 0.7689 | 0.7689 |
| 0.076 | 4.0 | 2800 | 0.1296 | 0.4161 | 0.8322 | 0.8322 | 0.8322 | 0.8322 |
| 0.0587 | 5.0 | 3500 | 0.0702 | 0.3078 | 0.6156 | 0.6156 | 0.6156 | 0.6156 |
| 0.0662 | 6.0 | 4200 | 0.0708 | 0.3613 | 0.7226 | 0.7226 | 0.7226 | 0.7226 |
| 0.059 | 7.0 | 4900 | 0.1063 | 0.4125 | 0.8249 | 0.8249 | 0.8249 | 0.8249 |
| 0.0532 | 8.0 | 5600 | 0.0693 | 0.3547 | 0.7094 | 0.7094 | 0.7094 | 0.7094 |
| 0.066 | 9.0 | 6300 | 0.0754 | 0.3932 | 0.7863 | 0.7863 | 0.7863 | 0.7863 |
| 0.0628 | 10.0 | 7000 | 0.0692 | 0.3874 | 0.7747 | 0.7747 | 0.7747 | 0.7747 |
| 0.0805 | 11.0 | 7700 | 0.0701 | 0.3896 | 0.7793 | 0.7793 | 0.7793 | 0.7793 |
| 0.0595 | 12.0 | 8400 | 0.0663 | 0.3774 | 0.7549 | 0.7549 | 0.7549 | 0.7549 |
| 0.0705 | 13.0 | 9100 | 0.0653 | 0.3717 | 0.7433 | 0.7433 | 0.7433 | 0.7433 |
| 0.071 | 14.0 | 9800 | 0.0651 | 0.3731 | 0.7461 | 0.7461 | 0.7461 | 0.7461 |
| 0.0656 | 15.0 | 10500 | 0.0648 | 0.3613 | 0.7227 | 0.7227 | 0.7227 | 0.7227 |
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
GGbond-No1/q-FrozenLake-v1-4x4-noSlippery
|
GGbond-No1
| 2023-11-13T09:59:52Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T09:59:47Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="GGbond-No1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AntoineD/camembert_ccnet_classification_tools_classifier-only_fr-p0.2
|
AntoineD
| 2023-11-13T09:58:56Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base-ccnet",
"base_model:finetune:almanach/camembert-base-ccnet",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-10T16:52:10Z |
---
base_model: camembert/camembert-base-ccnet
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: camembert_ccnet_classification_tools_classifier-only_fr-p0.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert_ccnet_classification_tools_classifier-only_fr-p0.2
This model is a fine-tuned version of [camembert/camembert-base-ccnet](https://huggingface.co/camembert/camembert-base-ccnet) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2525
- Accuracy: 0.975
- Learning Rate: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
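As a rough illustration, a minimal sketch with the 🤗 `pipeline` API, assuming the checkpoint loads as a standard text-classification model; the tool-class label names are not documented here and the example French query is hypothetical:
```python
from transformers import pipeline

# Hypothetical usage sketch; the tool-class label names are not documented in this card.
classifier = pipeline(
    "text-classification",
    model="AntoineD/camembert_ccnet_classification_tools_classifier-only_fr-p0.2",
)
print(classifier("Quel outil faut-il utiliser pour serrer cette vis ?"))  # hypothetical French query
```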
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 24
- eval_batch_size: 192
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 7 | 1.8405 | 0.3 | 0.0010 |
| No log | 2.0 | 14 | 1.5880 | 0.425 | 0.0010 |
| No log | 3.0 | 21 | 1.2827 | 0.725 | 0.0009 |
| No log | 4.0 | 28 | 1.1343 | 0.7 | 0.0009 |
| No log | 5.0 | 35 | 1.0062 | 0.7 | 0.0009 |
| No log | 6.0 | 42 | 0.8774 | 0.75 | 0.0009 |
| No log | 7.0 | 49 | 0.8032 | 0.75 | 0.0009 |
| No log | 8.0 | 56 | 0.7309 | 0.8 | 0.0009 |
| No log | 9.0 | 63 | 0.6953 | 0.8 | 0.0008 |
| No log | 10.0 | 70 | 0.6711 | 0.725 | 0.0008 |
| No log | 11.0 | 77 | 0.5830 | 0.9 | 0.0008 |
| No log | 12.0 | 84 | 0.6111 | 0.775 | 0.0008 |
| No log | 13.0 | 91 | 0.6019 | 0.8 | 0.0008 |
| No log | 14.0 | 98 | 0.4941 | 0.9 | 0.0008 |
| No log | 15.0 | 105 | 0.4901 | 0.875 | 0.0008 |
| No log | 16.0 | 112 | 0.4450 | 0.9 | 0.0007 |
| No log | 17.0 | 119 | 0.5169 | 0.775 | 0.0007 |
| No log | 18.0 | 126 | 0.4281 | 0.9 | 0.0007 |
| No log | 19.0 | 133 | 0.4314 | 0.875 | 0.0007 |
| No log | 20.0 | 140 | 0.4408 | 0.85 | 0.0007 |
| No log | 21.0 | 147 | 0.3775 | 0.825 | 0.0007 |
| No log | 22.0 | 154 | 0.3641 | 0.875 | 0.0006 |
| No log | 23.0 | 161 | 0.3698 | 0.925 | 0.0006 |
| No log | 24.0 | 168 | 0.3470 | 0.925 | 0.0006 |
| No log | 25.0 | 175 | 0.3649 | 0.9 | 0.0006 |
| No log | 26.0 | 182 | 0.3400 | 0.95 | 0.0006 |
| No log | 27.0 | 189 | 0.3451 | 0.925 | 0.0006 |
| No log | 28.0 | 196 | 0.3841 | 0.875 | 0.0005 |
| No log | 29.0 | 203 | 0.3141 | 0.95 | 0.0005 |
| No log | 30.0 | 210 | 0.3150 | 0.95 | 0.0005 |
| No log | 31.0 | 217 | 0.3493 | 0.9 | 0.0005 |
| No log | 32.0 | 224 | 0.3115 | 0.95 | 0.0005 |
| No log | 33.0 | 231 | 0.3133 | 0.95 | 0.0005 |
| No log | 34.0 | 238 | 0.3169 | 0.925 | 0.0004 |
| No log | 35.0 | 245 | 0.3054 | 0.95 | 0.0004 |
| No log | 36.0 | 252 | 0.2951 | 0.975 | 0.0004 |
| No log | 37.0 | 259 | 0.3018 | 0.9 | 0.0004 |
| No log | 38.0 | 266 | 0.2918 | 0.9 | 0.0004 |
| No log | 39.0 | 273 | 0.2817 | 0.95 | 0.0003 |
| No log | 40.0 | 280 | 0.2723 | 0.95 | 0.0003 |
| No log | 41.0 | 287 | 0.2618 | 0.95 | 0.0003 |
| No log | 42.0 | 294 | 0.2779 | 0.95 | 0.0003 |
| No log | 43.0 | 301 | 0.2806 | 0.95 | 0.0003 |
| No log | 44.0 | 308 | 0.2560 | 0.95 | 0.0003 |
| No log | 45.0 | 315 | 0.2566 | 0.95 | 0.0003 |
| No log | 46.0 | 322 | 0.2543 | 0.95 | 0.0002 |
| No log | 47.0 | 329 | 0.2784 | 0.975 | 0.0002 |
| No log | 48.0 | 336 | 0.2974 | 0.925 | 0.0002 |
| No log | 49.0 | 343 | 0.2755 | 0.975 | 0.0002 |
| No log | 50.0 | 350 | 0.2532 | 0.95 | 0.0002 |
| No log | 51.0 | 357 | 0.2495 | 0.95 | 0.0001 |
| No log | 52.0 | 364 | 0.2700 | 0.95 | 0.0001 |
| No log | 53.0 | 371 | 0.2808 | 0.95 | 0.0001 |
| No log | 54.0 | 378 | 0.2848 | 0.975 | 0.0001 |
| No log | 55.0 | 385 | 0.2728 | 0.975 | 0.0001 |
| No log | 56.0 | 392 | 0.2646 | 0.975 | 0.0001 |
| No log | 57.0 | 399 | 0.2592 | 0.975 | 5e-05 |
| No log | 58.0 | 406 | 0.2561 | 0.975 | 0.0000 |
| No log | 59.0 | 413 | 0.2525 | 0.975 | 0.0000 |
| No log | 60.0 | 420 | 0.2525 | 0.975 | 0.0 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1
|
satyajeetbhonsale/RL_trial
|
satyajeetbhonsale
| 2023-11-13T09:55:21Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T09:55:12Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: RL_trial
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
VoidZeroe/llama3.0-model
|
VoidZeroe
| 2023-11-13T09:48:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-11-13T09:47:35Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
Yntec/pineappleAnimeMix
|
Yntec
| 2023-11-13T09:40:46Z | 3,369 | 7 |
diffusers
|
[
"diffusers",
"safetensors",
"Anime",
"Base Model",
"Female",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"pmango300574",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-13T08:34:17Z |
---
license: creativeml-openrail-m
library_name: diffusers
language:
- en
tags:
- Anime
- Base Model
- Female
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- pmango300574
pipeline_tag: text-to-image
---
# Pineapple Anime Mix
Original page: https://civitai.com/models/190067/pineapple-anime-mix
Sample and prompt:

masterpiece, Cartoon Pretty CUTE LITTLE Girl, sitting on a box of CANDLES, DETAILED CHIBI EYES, holding candle, gorgeous detailed hair, Ponytail, Magazine ad, iconic, 1940, sharp focus. Illustration By ROSSDRAWS and KlaysMoji and Dave Rapoza and artgerm and leyendecker and Clay Mann
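A minimal 🤗 Diffusers sketch for trying a prompt like the one above, assuming the diffusers-format weights in this repo load as a standard `StableDiffusionPipeline`; the output filename is arbitrary:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Yntec/pineappleAnimeMix", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "masterpiece, Cartoon Pretty CUTE LITTLE Girl, sitting on a box of CANDLES, DETAILED CHIBI EYES"
image = pipe(prompt).images[0]
image.save("pineapple_anime_mix.png")
```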
|
BlueNipples/TimeCrystal-l2-13B
|
BlueNipples
| 2023-11-13T09:40:06Z | 1,659 | 16 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2",
"roleplaying",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-11T11:09:16Z |
---
license: apache-2.0
tags:
- llama-2
- roleplaying
---
This 13B model, TimeCrystal-l2-13B, is built to maximize logic and instruction following while also increasing the vividness of prose found in Chronos-based models like MythoMax (as opposed to more romantic prose), hopefully without losing the elegant narrative-structure touch of newer models like Synthia and Xwin. TL;DR: an attempt at more clever, better prose.
Tentative test results: I'm not certain whether logic/instruction following improved (I haven't tested much), but the prose infusion seems to have worked really well.
It is built so:
SLERPS:
Amethyst + Openchat Super = OpenStone
MythoMax + Chronos = ChronoMax
ChronoMax + Amethyst = TimeStone
Gradient Merge:
TimeStone + OpenStone (0.9,0,0) = TimeCrystal
Props to all the mergers, fine tuners!
All models in Merge: Many, lol.
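For completeness, a minimal 🤗 Transformers inference sketch; the Alpaca-style prompt is an assumption (adjust to whatever template your frontend recommends), and the instruction text is a hypothetical example:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "BlueNipples/TimeCrystal-l2-13B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

# Alpaca-style prompt is an assumption; adjust to your preferred roleplay template.
prompt = "### Instruction:\nWrite a vivid opening paragraph for a fantasy tavern scene.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```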
|
cp-cp/sd-class-butterflies
|
cp-cp
| 2023-11-13T09:36:14Z | 0 | 1 |
diffusers
|
[
"diffusers",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2023-11-12T15:17:01Z |
---
{}
---
Hi
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('cp-cp/sd-class-butterflies')
pipeline.to("cuda")
image = pipeline().images[0]
image
```
|
meiyun1995/PPO-LunarLander-v2
|
meiyun1995
| 2023-11-13T09:30:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-09T10:06:50Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 249.00 +/- 18.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
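A fuller sketch of loading and evaluating the checkpoint with `huggingface_sb3`; the filename is a guess based on the usual course convention, so check the repo's file list:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

# The filename is a guess based on the usual course convention; check the repo's file list.
checkpoint = load_from_hub(repo_id="meiyun1995/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint, print_system_info=True)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```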
|
Denyol/FakeNews-roberta-large-stable
|
Denyol
| 2023-11-13T09:30:17Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-13T08:44:59Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: FakeNews-roberta-large-stable
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# FakeNews-roberta-large-stable
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1757
- Accuracy: 0.9668
## Model description
More information needed
## Intended uses & limitations
More information needed
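As a rough illustration of intended use, a minimal sketch with the 🤗 `pipeline` API; the label names (e.g. fake vs. real) are not documented in this card, and the example headline is hypothetical:
```python
from transformers import pipeline

# Hypothetical usage sketch; the label names are not documented in this card.
clf = pipeline("text-classification", model="Denyol/FakeNews-roberta-large-stable")
print(clf("Scientists confirm that drinking coffee makes you immortal."))  # hypothetical headline
```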
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4307 | 1.0 | 802 | 0.3262 | 0.9350 |
| 0.2795 | 2.0 | 1605 | 0.4021 | 0.8748 |
| 0.2748 | 3.0 | 2407 | 0.2066 | 0.9593 |
| 0.205 | 4.0 | 3210 | 0.2425 | 0.9449 |
| 0.117 | 5.0 | 4010 | 0.1757 | 0.9668 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Codekirk42/fR95FurryStyleModel
|
Codekirk42
| 2023-11-13T09:26:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-11-13T09:09:34Z |
---
license: creativeml-openrail-m
---
|
Adarshiniaddy-17/peacock
|
Adarshiniaddy-17
| 2023-11-13T09:07:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-13T09:00:41Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Peacock Dreambooth model trained by Adarshiniaddy-17 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: PIETW-97
Sample pictures of this concept:
|
morris-chang/SalesBot1_CoT_Lora_add_thought_baseline
|
morris-chang
| 2023-11-13T09:06:43Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-11-13T08:21:56Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.5.0
|
soongbren/emotion-analysis-nanot5-small-malaysian-cased
|
soongbren
| 2023-11-13T09:03:02Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:patrickxchong/bert-tiny-bahasa-cased-sentiment",
"base_model:finetune:patrickxchong/bert-tiny-bahasa-cased-sentiment",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-11T05:47:42Z |
---
license: apache-2.0
base_model: patrickxchong/bert-tiny-bahasa-cased-sentiment
tags:
- generated_from_trainer
model-index:
- name: emotion-analysis-nanot5-small-malaysian-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-analysis-nanot5-small-malaysian-cased
This model is a fine-tuned version of [patrickxchong/bert-tiny-bahasa-cased-sentiment](https://huggingface.co/patrickxchong/bert-tiny-bahasa-cased-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.9733
- eval_accuracy: {'accuracy': 0.7276688453159041}
- eval_f1score: {'f1': 0.7182408591455907}
- eval_runtime: 6.6242
- eval_samples_per_second: 138.583
- eval_steps_per_second: 17.361
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 642
- num_epochs: 7
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
abdiharyadi/IndoT5-base-nafkhan-epochs-4
|
abdiharyadi
| 2023-11-13T08:59:04Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:abdiharyadi/IndoT5-base-nafkhan-epochs-3",
"base_model:finetune:abdiharyadi/IndoT5-base-nafkhan-epochs-3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-13T01:40:17Z |
---
base_model: abdiharyadi/IndoT5-base-nafkhan-epochs-3
tags:
- generated_from_trainer
model-index:
- name: IndoT5-base-nafkhan-epochs-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IndoT5-base-nafkhan-epochs-4
This model is a fine-tuned version of [abdiharyadi/IndoT5-base-nafkhan-epochs-3](https://huggingface.co/abdiharyadi/IndoT5-base-nafkhan-epochs-3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 342
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 200
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.1692 | 1.0 | 23217 | 0.1608 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Balachandar/Mistral-7B-Instruct-v0.1-sharded-fine-tuned-adapters-V1
|
Balachandar
| 2023-11-13T08:57:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2023-11-13T08:46:43Z |
---
library_name: peft
base_model: bn22/Mistral-7B-Instruct-v0.1-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
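Pending the author's own instructions, here is a minimal sketch of loading the adapters on top of the base model listed in the metadata, assuming a standard PEFT QLoRA setup; the prompt and generation settings are hypothetical:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumptions: the adapters target the base model from the metadata and follow a standard QLoRA setup.
base_id = "bn22/Mistral-7B-Instruct-v0.1-sharded"
adapter_id = "Balachandar/Mistral-7B-Instruct-v0.1-sharded-fine-tuned-adapters-V1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("[INST] Summarize what a LoRA adapter is. [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```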
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2.dev0
|
minh009/classification-review-1
|
minh009
| 2023-11-13T08:50:38Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-13T08:29:12Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: minh009/classification-review-1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# minh009/classification-review-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1963
- Validation Loss: 0.4600
- Train Accuracy: 0.9091
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 220, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.3703 | 1.1197 | 0.8182 | 0 |
| 0.6976 | 0.6242 | 0.8523 | 1 |
| 0.3556 | 0.4973 | 0.8977 | 2 |
| 0.2399 | 0.4576 | 0.8977 | 3 |
| 0.1963 | 0.4600 | 0.9091 | 4 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Prema-58/panda
|
Prema-58
| 2023-11-13T08:50:17Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-13T08:46:24Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### PANDA Dreambooth model trained by Prema-58 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: PIETW-491
Sample pictures of this concept:
.jpg.jpg)
|
QQhahaha/QuestionAnswering
|
QQhahaha
| 2023-11-13T08:30:54Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"zh",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-11-10T12:42:33Z |
---
language:
- zh
---
# Question Answering
This is an assignment for Applied Deep Learning, a course at National Taiwan University (NTU).
### Task Description: Chinese Extractive Question Answering (QA)
Determine the start and end positions of the answer span.
Input (question):
```
在關西鎮以什麼方言為主?
```
Input (text):
```
新竹縣是中華民國臺灣省的縣,位於臺灣本島西北部,北臨桃園市,南接苗栗縣,東南以雪山山脈與宜蘭縣、臺中市相連,西部面向台灣海峽,西接與新竹市交界。全縣總面積約1,427平方公里,除鳳山溪、頭前溪中下游沖積平原外,其餘大多為丘陵、台地及山地。早期新竹縣郊區多務農,1970年代工業技術研究院創設於新竹市,1980年代新竹科學工業園區設立於新竹市東區及新竹縣寶山鄉,1990年代位於湖口鄉的新竹工業區也逐漸從傳統產業聚落轉型為新興高科技產業聚落,使得新竹縣成為北台灣的高科技產業重鎮,而人口也在近幾年急速增加。本縣方言於絕大部分地區使用海陸客家話,竹北市及新豐鄉沿海地區部分使用泉州腔閩南話較多,關西鎮及峨眉鄉部分使用四縣腔客家話為主。
```
Output (answer):
```
四縣腔客家話
```
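For reference, the span prediction described above can be consumed through the 🤗 `question-answering` pipeline. A minimal, hedged sketch (assuming the fine-tuned weights and tokenizer in this repository load directly):
```python
from transformers import pipeline

# Hedged sketch: load the fine-tuned extractive QA model from this repository.
qa = pipeline("question-answering", model="QQhahaha/QuestionAnswering")

question = "在關西鎮以什麼方言為主?"
context = "新竹縣是中華民國臺灣省的縣,..."  # paste the full passage from the example above

result = qa(question=question, context=context)
print(result["answer"], result["score"])  # expected answer: 四縣腔客家話
```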
### Objective
- Fine-tune pre-trained models ([bert-base-chinese](https://huggingface.co/bert-base-chinese), [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext)) to pass the baseline.
```
Baseline: accuracy score > 0.79
```
### Experiments
Comparison between BERT-base and RoBERTa.
The models bert-base-chinese and hfl/chinese-roberta-wwm-ext are built on the BERT and RoBERTa architectures, respectively.
Notably, hfl/chinese-roberta-wwm-ext is based on the RoBERTa framework and boasts a larger model size, with 355 million parameters, in contrast to the 110 million parameters of bert-base-chinese.
During training, hfl/chinese-roberta-wwm-ext-large utilized a more extensive and diverse set of articles, including web pages, news articles, and social media content.
|
temporary0-0name/run_3
|
temporary0-0name
| 2023-11-13T08:28:10Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-generation",
"generated_from_trainer",
"dataset:wikitext",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-13T07:47:21Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: run_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run_3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.8139 | 0.07 | 50 | 7.3922 |
| 7.3173 | 0.14 | 100 | 7.2946 |
| 7.2587 | 0.21 | 150 | 7.2339 |
| 7.2122 | 0.27 | 200 | 7.2167 |
| 7.1908 | 0.34 | 250 | 7.1945 |
| 7.171 | 0.41 | 300 | 7.1875 |
| 7.2054 | 0.48 | 350 | 7.1893 |
| 7.1899 | 0.55 | 400 | 7.1889 |
| 7.1839 | 0.62 | 450 | 7.1801 |
| 7.1571 | 0.69 | 500 | 7.1759 |
| 7.1577 | 0.75 | 550 | 7.1725 |
| 7.1799 | 0.82 | 600 | 7.1757 |
| 7.1698 | 0.89 | 650 | 7.1715 |
| 7.1705 | 0.96 | 700 | 7.1651 |
| 7.1712 | 1.03 | 750 | 7.1677 |
| 7.1418 | 1.1 | 800 | 7.1699 |
| 7.1692 | 1.17 | 850 | 7.1659 |
| 7.1376 | 1.24 | 900 | 7.1656 |
| 7.1703 | 1.3 | 950 | 7.1643 |
| 7.1534 | 1.37 | 1000 | 7.1676 |
| 7.1445 | 1.44 | 1050 | 7.1607 |
| 7.1552 | 1.51 | 1100 | 7.1596 |
| 7.1475 | 1.58 | 1150 | 7.1599 |
| 7.1401 | 1.65 | 1200 | 7.1593 |
| 7.161 | 1.72 | 1250 | 7.1606 |
| 7.1513 | 1.78 | 1300 | 7.1564 |
| 7.1465 | 1.85 | 1350 | 7.1548 |
| 7.1603 | 1.92 | 1400 | 7.1529 |
| 7.1203 | 1.99 | 1450 | 7.1533 |
| 7.1308 | 2.06 | 1500 | 7.1546 |
| 7.1244 | 2.13 | 1550 | 7.1546 |
| 7.1437 | 2.2 | 1600 | 7.1561 |
| 7.1618 | 2.26 | 1650 | 7.1517 |
| 7.1502 | 2.33 | 1700 | 7.1519 |
| 7.146 | 2.4 | 1750 | 7.1514 |
| 7.1088 | 2.47 | 1800 | 7.1520 |
| 7.1335 | 2.54 | 1850 | 7.1483 |
| 7.1388 | 2.61 | 1900 | 7.1472 |
| 7.1502 | 2.68 | 1950 | 7.1470 |
| 7.1511 | 2.75 | 2000 | 7.1479 |
| 7.1288 | 2.81 | 2050 | 7.1506 |
| 7.1416 | 2.88 | 2100 | 7.1488 |
| 7.1568 | 2.95 | 2150 | 7.1512 |
| 7.133 | 3.02 | 2200 | 7.1497 |
| 7.1178 | 3.09 | 2250 | 7.1501 |
| 7.1482 | 3.16 | 2300 | 7.1506 |
| 7.1242 | 3.23 | 2350 | 7.1504 |
| 7.1181 | 3.29 | 2400 | 7.1497 |
| 7.1133 | 3.36 | 2450 | 7.1495 |
| 7.1199 | 3.43 | 2500 | 7.1468 |
| 7.146 | 3.5 | 2550 | 7.1467 |
| 7.1284 | 3.57 | 2600 | 7.1455 |
| 7.1356 | 3.64 | 2650 | 7.1464 |
| 7.1372 | 3.71 | 2700 | 7.1445 |
| 7.1307 | 3.77 | 2750 | 7.1429 |
| 7.1407 | 3.84 | 2800 | 7.1427 |
| 7.126 | 3.91 | 2850 | 7.1426 |
| 7.1288 | 3.98 | 2900 | 7.1425 |
| 7.1223 | 4.05 | 2950 | 7.1428 |
| 7.1169 | 4.12 | 3000 | 7.1429 |
| 7.139 | 4.19 | 3050 | 7.1441 |
| 7.1231 | 4.26 | 3100 | 7.1433 |
| 7.1114 | 4.32 | 3150 | 7.1429 |
| 7.1204 | 4.39 | 3200 | 7.1429 |
| 7.0994 | 4.46 | 3250 | 7.1430 |
| 7.1039 | 4.53 | 3300 | 7.1434 |
| 7.1489 | 4.6 | 3350 | 7.1428 |
| 7.1315 | 4.67 | 3400 | 7.1426 |
| 7.1173 | 4.74 | 3450 | 7.1426 |
| 7.1241 | 4.8 | 3500 | 7.1428 |
| 7.1001 | 4.87 | 3550 | 7.1427 |
| 7.137 | 4.94 | 3600 | 7.1422 |
### Framework versions
- Transformers 4.33.1
- Pytorch 1.12.1
- Datasets 2.14.6
- Tokenizers 0.13.3
|
sekinat/ppo-huggy
|
sekinat
| 2023-11-13T08:19:39Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-11-13T08:19:24Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: sekinat/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Cainiao-AI/G2PTL
|
Cainiao-AI
| 2023-11-13T08:17:39Z | 111 | 20 |
transformers
|
[
"transformers",
"pytorch",
"G2PTL",
"feature-extraction",
"custom_code",
"zh",
"arxiv:2304.01559",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2023-04-12T07:40:31Z |
---
language: zh
license: apache-2.0
---
# G2PTL-1
## Introduction
G2PTL-1: a Geography-Graph Pre-trained model for addresses.
This is the first version of G2PTL (v1.0).
## Other work
We also provide an integrated system for text-based address analysis in the logistics field, named **TAAS**, which supports several address perception tasks such as Address Standardization and Address Completion, as well as other logistics-related tasks such as Geo-locating From Text to Geospatial. TAAS is available at https://huggingface.co/Cainiao-AI/TAAS.
## Model description
G2PTL is a Transformer model that is pretrained on a large corpus of Chinese addresses in a self-supervised manner. It has three pretraining objectives:
- Masked language modeling (MLM): taking an address, the model randomly masks some words in the input text and predicts the masked words. It should be noted that for the geographical entities in the address, we adopt the Whole Word Masking (WWM) approach to mask them and learn the co-occurrence relationships among them.
- Hierarchical text modeling (HTC): an address is a text with a hierarchical structure of province, city, district, and street. HTC is used to model the hierarchical relationship among these levels in addresses.

- Geocoding (GC): an address can be represented by a point with latitude and longitude in the real world. The GC task is designed to learn the mapping relationship between address text and geographical location.
More detail: https://arxiv.org/abs/2304.01559

## Intended uses & limitations
This model is designed for decision tasks based on address text, including tasks related to understanding address texts and Spatial-Temporal downstream tasks which rely on address text representation.
1. Address text understanding tasks
- Geocoding
- Named Entity Recognition
- Geographic Entity Alignment
- Address Text Similarity
- Address Text Classification
- ...
2. Spatial-Temporal downstream tasks:
- Estimated Time of Arrival (ETA) Prediction
- Pick-up & Delivery Route Prediction.
- Express Volume Prediction
- ...
The model currently only supports Chinese addresses, and it is an encoder-only model that is not suitable for text generation scenarios such as question answering. If you need address-text-based dialogue capabilities, you can look forward to the second version of G2PTL (v2.0).
## How to use
You can use this model directly with a pipeline for masked language modeling:
```Python
>>> from transformers import pipeline, AutoModel, AutoTokenizer
>>> model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
>>> tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
>>> mask_filler = pipeline(task='fill-mask', model=model, tokenizer=tokenizer)
>>> mask_filler("浙江省杭州市[MASK]杭区五常街道阿里巴巴西溪园区")
```
```json
[{'score': 1.0,
'token': 562,
'token_str': '余',
'sequence': '浙 江 省 杭 州 市 余 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'},
{'score': 7.49648343401077e-09,
'token': 1852,
'token_str': '杭',
'sequence': '浙 江 省 杭 州 市 杭 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'},
{'score': 5.823675763849678e-09,
'token': 213,
'token_str': '西',
'sequence': '浙 江 省 杭 州 市 西 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'},
{'score': 3.383779922927488e-09,
'token': 346,
'token_str': '五',
'sequence': '浙 江 省 杭 州 市 五 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'},
{'score': 2.9116642430437878e-09,
'token': 2268,
'token_str': '荆',
'sequence': '浙 江 省 杭 州 市 荆 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'}]
```
You can also use this model for multiple [MASK] filling in PyTorch:
```python
from transformers import AutoModel, AutoTokenizer
import torch
model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
model.eval()
text = ['浙江省杭州市[MASK][MASK][MASK]五常街道阿里巴巴西溪园区']
encoded_input = tokenizer(text, return_tensors='pt')
outputs = model(**encoded_input)
prediction_scores = outputs.logits
prediction_scores = torch.argmax(prediction_scores, dim=-1)
prediction_scores = prediction_scores.cpu().detach().numpy()
input_ids = encoded_input['input_ids']
print('G2PTL:', tokenizer.decode(prediction_scores[torch.where(input_ids.cpu()>0)][1:-1]))
```
```json
G2PTL: 浙 江 省 杭 州 市 余 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区
```
Here is how to use this model to get the HTC output of a given text in PyTorch:
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
model.eval()
text = "浙江省杭州市五常街道阿里巴巴西溪园区"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
htc_layer_out = output.htc_layer_out
htc_pred = model.get_htc_code(htc_layer_out)
print('HTC Result: ', model.decode_htc_code_2_chn(htc_pred))
```
```json
HTC Result: ['浙江省杭州市余杭区五常街道', '浙江省杭州市五常街道']
```
Here is how to use this model to get the features/embeddings of a given text in PyTorch:
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
model.eval()
text = "浙江省杭州市余杭区五常街道阿里巴巴西溪园区"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
final_hidden_state = output.final_hidden_state
```
Here is how to use this model to get cosine similarity between two address texts in PyTorch:
```python
from transformers import AutoModel, AutoTokenizer
import torch
model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True)
model.eval()
text = ["浙江省杭州市余杭区五常街道阿里巴巴西溪园区", "浙江省杭州市阿里巴巴西溪园区"]
encoded_input = tokenizer(text, return_tensors='pt', padding=True)
output = model(**encoded_input)
final_pooler_output = output.final_pooler_output
cos_sim = torch.cosine_similarity(final_pooler_output[0], final_pooler_output[1])
print('Cosine Similarity: ', cos_sim[0].detach().numpy())
```
```json
Cosine Similarity:  0.8974346
```
## Training loss




## Requirements
python>=3.8
```shell
tqdm==4.65.0
torch==1.13.1
transformers==4.27.4
datasets==2.11.0
fairseq==0.12.2
```
## Citation
```bibtex
@misc{wu2023g2ptl,
title={G2PTL: A Pre-trained Model for Delivery Address and its Applications in Logistics System},
author={Lixia Wu and Jianlin Liu and Junhong Lou and Haoyuan Hu and Jianbin Zheng and Haomin Wen and Chao Song and Shu He},
year={2023},
eprint={2304.01559},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```
|
darshsingh1/sqlcoder2-fasttrain-7k
|
darshsingh1
| 2023-11-13T08:15:51Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"dataset:mpachauri/DatasetTrimmed",
"base_model:NousResearch/Llama-2-7b-hf",
"base_model:finetune:NousResearch/Llama-2-7b-hf",
"region:us"
] | null | 2023-11-10T15:41:42Z |
---
base_model: NousResearch/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: sqlcoder2-fasttrain-7k
results: []
datasets:
- mpachauri/DatasetTrimmed
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sqlcoder2-fasttrain-7k
This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.5
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
soongbren/distilbert-base-multilingual-cased-sentiments-student-small-dataset
|
soongbren
| 2023-11-13T08:11:37Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:lxyuan/distilbert-base-multilingual-cased-sentiments-student",
"base_model:finetune:lxyuan/distilbert-base-multilingual-cased-sentiments-student",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-07T08:42:15Z |
---
license: apache-2.0
base_model: lxyuan/distilbert-base-multilingual-cased-sentiments-student
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-multilingual-cased-sentiments-student-small-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sentiments-student-small-dataset
This model is a fine-tuned version of [lxyuan/distilbert-base-multilingual-cased-sentiments-student](https://huggingface.co/lxyuan/distilbert-base-multilingual-cased-sentiments-student) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6234
- eval_accuracy: {'accuracy': 0.7055630936227951}
- eval_f1score: {'f1': 0.695644141066491}
- eval_runtime: 12.2657
- eval_samples_per_second: 60.086
- eval_steps_per_second: 7.582
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
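As a hedged usage sketch (not part of the auto-generated card), the fine-tuned checkpoint can presumably be loaded with the standard `text-classification` pipeline; the label set is assumed to be inherited from the base student model:
```python
from transformers import pipeline

# Hedged sketch: assumes the tokenizer and id2label mapping were pushed together
# with the fine-tuned weights.
classifier = pipeline(
    "text-classification",
    model="soongbren/distilbert-base-multilingual-cased-sentiments-student-small-dataset",
)
print(classifier("I really enjoyed this product and would buy it again."))
```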
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 515
- num_epochs: 7
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
nikxtaco/ppo-SoccerTwos
|
nikxtaco
| 2023-11-13T08:08:19Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2023-11-13T08:08:15Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nikxtaco/ppo-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
scy0208/whisper-aviation
|
scy0208
| 2023-11-13T08:04:07Z | 24 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small.en",
"base_model:finetune:openai/whisper-small.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-13T03:01:09Z |
---
license: apache-2.0
base_model: openai/whisper-small.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-aviation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-aviation
This model is a fine-tuned version of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0058
- Wer: 42.5926
## Model description
More information needed
## Intended uses & limitations
More information needed
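As a hedged usage sketch (not part of the auto-generated card), the checkpoint can presumably be used through the `automatic-speech-recognition` pipeline; the audio path below is a placeholder:
```python
from transformers import pipeline

# Hedged sketch: English-only Whisper checkpoint fine-tuned on aviation audio.
asr = pipeline("automatic-speech-recognition", model="scy0208/whisper-aviation")

# "tower_call.wav" is a hypothetical local recording (16 kHz mono works best).
print(asr("tower_call.wav")["text"])
```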
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0002 | 166.67 | 1000 | 0.9191 | 43.0041 |
| 0.0001 | 333.33 | 2000 | 0.9722 | 41.9753 |
| 0.0 | 500.0 | 3000 | 0.9963 | 41.7695 |
| 0.0 | 666.67 | 4000 | 1.0058 | 42.5926 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
sally9805/saved_model_prior_preserving
|
sally9805
| 2023-11-13T08:01:42Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-13T07:37:06Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - sally9805/saved_model_prior_preserving
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
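A hedged inference sketch (not part of the auto-generated card), using the instance prompt from the card metadata; the scene description and file name are placeholders:
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: load the full DreamBooth checkpoint pushed to this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "sally9805/saved_model_prior_preserving", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a snowy forest", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```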
|
lmdeploy/llama2-chat-70b-4bit
|
lmdeploy
| 2023-11-13T07:54:54Z | 4 | 3 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-23T16:06:32Z |
---
license: llama2
pipeline_tag: text-generation
tags:
- text-generation-inference
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64ccdc322e592905f922a06e/VhwQtaklohkUXFWkjA-3M.png" width="450"/>
English | [简体中文](README_zh-CN.md)
</div>
<p align="center">
👋 join us on <a href="https://twitter.com/intern_lm" target="_blank">Twitter</a>, <a href="https://discord.gg/xa29JuW87d" target="_blank">Discord</a> and <a href="https://r.vansin.top/?r=internwx" target="_blank">WeChat</a>
</p>
# W4A16 LLM Model Deployment
LMDeploy supports 4-bit weight-only LLM inference; the minimum requirement for NVIDIA GPUs is the sm80 architecture.
Before proceeding with inference, please ensure that lmdeploy (>= v0.0.14) is installed.
```shell
pip install 'lmdeploy>=0.0.14'
```
## 4-bit LLM model Inference
You can download the pre-quantized 4-bit weight models from LMDeploy's [model zoo](https://huggingface.co/lmdeploy) and conduct inference using the following command.
Alternatively, you can quantize 16-bit weights to 4-bit weights following the ["4-bit Weight Quantization"](#4-bit-weight-quantization) section, and then perform inference as per the below instructions.
Take the 4-bit Llama-2-70B model from the model zoo as an example:
```shell
git-lfs install
git clone https://huggingface.co/lmdeploy/llama2-chat-70b-4bit
```
As demonstrated in the commands below, first convert the model's layout with `lmdeploy convert`, and then you can interact with the AI assistant in the terminal.
```shell
## Convert the model's layout and store it in the default path, ./workspace.
lmdeploy convert \
--model-name llama2 \
--model-path ./llama2-chat-70b-4bit \
--model-format awq \
--group-size 128
## inference
lmdeploy chat ./workspace
```
## Serve with gradio
If you wish to interact with the model via web ui, please initiate the gradio server as indicated below:
```shell
lmdeploy serve gradio ./workspace --server_name {ip_addr} --server_port {port}
```
Subsequently, you can open the website `http://{ip_addr}:{port}` in your browser and interact with the model.
## Inference Performance
We benchmarked Llama 2 7B and 13B with 4-bit quantization on an NVIDIA GeForce RTX 4090 using [profile_generation.py](https://github.com/InternLM/lmdeploy/blob/main/benchmark/profile_generation.py). We measure token generation throughput (tokens/s) with a single prompt token and 512 generated tokens. All results are measured for single-batch inference.
| model | llm-awq | mlc-llm | turbomind |
| ----------- | ------- | ------- | --------- |
| Llama 2 7B | 112.9 | 159.4 | 206.4 |
| Llama 2 13B | N/A | 90.7 | 115.8 |
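To reproduce the benchmark, first install `nvidia-ml-py` (presumably needed by the profiling script for GPU monitoring), then run `profile_generation.py`: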
```shell
pip install nvidia-ml-py
```
```bash
python profile_generation.py \
--model-path /path/to/your/model \
--concurrency 1 8 --prompt-tokens 0 512 --completion-tokens 2048 512
```
## 4-bit Weight Quantization
It includes two steps:
- generate quantization parameter
- quantize model according to the parameter
### Step 1: Generate Quantization Parameter
```shell
# --calib_dataset: calibration dataset; supports c4, ptb, wikitext2, pileval
# --calib_samples: number of samples in the calibration set; reduce if memory is insufficient
# --calib_seqlen: length of a single piece of text; reduce if memory is insufficient
# --work_dir: folder storing the PyTorch-format quantization statistics and post-quantization weights
lmdeploy lite calibrate \
  --model $HF_MODEL \
  --calib_dataset 'c4' \
  --calib_samples 128 \
  --calib_seqlen 2048 \
  --work_dir $WORK_DIR
```
### Step 2: Quantize Weights
LMDeploy employs the AWQ algorithm for model weight quantization.
```shell
# --w_bits: bit width for weight quantization
# --w_sym: whether to use symmetric quantization for weights
# --w_group_size: group size for weight quantization statistics
# --work_dir: directory holding the quantization parameters from Step 1
lmdeploy lite auto_awq \
  --model $HF_MODEL \
  --w_bits 4 \
  --w_sym False \
  --w_group_size 128 \
  --work_dir $WORK_DIR
```
After the quantization is complete, the quantized model is saved to `$WORK_DIR`. Then you can proceed with model inference according to the instructions in the ["4-Bit Weight Model Inference"](#4-bit-llm-model-inference) section.
|
rizkyjun/bloom-7b-finetuned-aings-adapters-2
|
rizkyjun
| 2023-11-13T07:48:08Z | 12 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-7b1",
"base_model:adapter:bigscience/bloom-7b1",
"region:us"
] | null | 2023-11-12T15:28:14Z |
---
library_name: peft
base_model: bigscience/bloom-7b1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
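In the absence of an official snippet, here is a hedged loading sketch: it assumes the adapter targets `bigscience/bloom-7b1` (as declared in the card metadata) and reuses the 8-bit setup reported under "Training procedure" below; the prompt is a placeholder.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "bigscience/bloom-7b1"
adapter_id = "rizkyjun/bloom-7b-finetuned-aings-adapters-2"

# Hedged sketch: load the base model in 8-bit, then attach this PEFT adapter.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, please introduce yourself.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```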
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.2.dev0
|
rizkyjun/bloom-7b-finetuned-aings-adapters-3
|
rizkyjun
| 2023-11-13T07:47:14Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloom-7b1",
"base_model:adapter:bigscience/bloom-7b1",
"region:us"
] | null | 2023-11-12T15:26:09Z |
---
library_name: peft
base_model: bigscience/bloom-7b1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.6.2.dev0
|
makwingchi/ppo_part1
|
makwingchi
| 2023-11-13T07:33:20Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T07:33:16Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -123.86 +/- 71.79
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'makwingchi/ppo_part1'
'batch_size': 512
'minibatch_size': 128}
```
|
rashid0784/mistral-better
|
rashid0784
| 2023-11-13T07:32:21Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2023-11-13T07:32:15Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
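In the absence of an official snippet, here is a hedged loading sketch: it assumes the adapter targets `mistralai/Mistral-7B-v0.1` (as declared in the card metadata) and recreates the 4-bit NF4 setup reported under "Training procedure" below; the prompt is a placeholder.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "rashid0784/mistral-better"

# Hedged sketch: 4-bit NF4 quantization with bfloat16 compute, matching the
# bitsandbytes config listed below.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```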
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.1
|
styleai/zakaria
|
styleai
| 2023-11-13T07:19:22Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stablediffusionapi/realistic-vision-51",
"base_model:adapter:stablediffusionapi/realistic-vision-51",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-13T04:45:25Z |
---
license: creativeml-openrail-m
base_model: stablediffusionapi/realistic-vision-51
instance_prompt: hta, photograph, man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - styleai/zakaria
These are LoRA adaptation weights for stablediffusionapi/realistic-vision-51. The weights were trained on the instance prompt "hta, photograph, man" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
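A hedged inference sketch (not part of the auto-generated card): load the base model named above, attach the LoRA weights from this repo (assuming they were saved in diffusers' LoRA format), and prompt with the instance tokens; the extra style words and file name are placeholders.
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: base checkpoint from the card, LoRA weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "stablediffusionapi/realistic-vision-51", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("styleai/zakaria")

image = pipe("hta, photograph, man, studio lighting", num_inference_steps=30).images[0]
image.save("zakaria.png")
```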
|
guishe/span-marker-bge-base-en-v1.5-fewnerd-fine-super
|
guishe
| 2023-11-13T07:19:16Z | 4 | 1 |
span-marker
|
[
"span-marker",
"pytorch",
"token-classification",
"ner",
"named-entity-recognition",
"generated_from_span_marker_trainer",
"en",
"dataset:DFKI-SLT/few-nerd",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] |
token-classification
| 2023-11-09T17:13:27Z |
---
language:
- en
license: cc-by-sa-4.0
library_name: span-marker
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
- generated_from_span_marker_trainer
datasets:
- DFKI-SLT/few-nerd
metrics:
- precision
- recall
- f1
widget:
- text: The WPC led the international peace movement in the decade after the Second
World War, but its failure to speak out against the Soviet suppression of the
1956 Hungarian uprising and the resumption of Soviet nuclear tests in 1961 marginalised
it, and in the 1960s it was eclipsed by the newer, non-aligned peace organizations
like the Campaign for Nuclear Disarmament.
- text: Most of the Steven Seagal movie "Under Siege "(co-starring Tommy Lee Jones)
was filmed on the, which is docked on Mobile Bay at Battleship Memorial Park and
open to the public.
- text: 'The Central African CFA franc (French: "franc CFA "or simply "franc ", ISO
4217 code: XAF) is the currency of six independent states in Central Africa: Cameroon,
Central African Republic, Chad, Republic of the Congo, Equatorial Guinea and Gabon.'
- text: Brenner conducted post-doctoral research at Brandeis University with Gregory
Petsko and then took his first academic position at Thomas Jefferson University
in 1996, moving to Dartmouth Medical School in 2003, where he served as Associate
Director for Basic Sciences at Norris Cotton Cancer Center.
- text: On Friday, October 27, 2017, the Senate of Spain (Senado) voted 214 to 47
to invoke Article 155 of the Spanish Constitution over Catalonia after the Catalan
Parliament declared the independence.
pipeline_tag: token-classification
base_model: BAAI/bge-base-en-v1.5
model-index:
- name: SpanMarker with BAAI/bge-base-en-v1.5 on FewNERD
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: FewNERD
type: DFKI-SLT/few-nerd
split: eval
metrics:
- type: f1
value: 0.6726393599802055
name: F1
- type: precision
value: 0.6740082644628099
name: Precision
- type: recall
value: 0.6712760046916476
name: Recall
---
# SpanMarker with BAAI/bge-base-en-v1.5 on FewNERD
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset that can be used for Named Entity Recognition. This SpanMarker model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the underlying encoder.
## Model Details
### Model Description
- **Model Type:** SpanMarker
- **Encoder:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd)
- **Language:** en
- **License:** cc-by-sa-4.0
### Model Sources
- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
- **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
### Model Labels
| Label | Examples |
|:-----------------------------------------|:---------------------------------------------------------------------------------------------------------|
| art-broadcastprogram | "Corazones", "The Gale Storm Show : Oh , Susanna", "Street Cents" |
| art-film | "L'Atlantide", "Bosch", "Shawshank Redemption" |
| art-music | "Atkinson , Danko and Ford ( with Brockie and Hilton )", "Hollywood Studio Symphony", "Champion Lover" |
| art-other | "Venus de Milo", "The Today Show", "Aphrodite of Milos" |
| art-painting | "Cofiwch Dryweryn", "Touit", "Production/Reproduction" |
| art-writtenart | "Time", "The Seven Year Itch", "Imelda de ' Lambertazzi" |
| building-airport | "Newark Liberty International Airport", "Luton Airport", "Sheremetyevo International Airport" |
| building-hospital | "Hokkaido University Hospital", "Memorial Sloan-Kettering Cancer Center", "Yeungnam University Hospital" |
| building-hotel | "Radisson Blu Sea Plaza Hotel", "Flamingo Hotel", "The Standard Hotel" |
| building-library | "Bayerische Staatsbibliothek", "British Library", "Berlin State Library" |
| building-other | "Communiplex", "Alpha Recording Studios", "Henry Ford Museum" |
| building-restaurant | "Carnegie Deli", "Fatburger", "Trumbull" |
| building-sportsfacility | "Boston Garden", "Glenn Warner Soccer Facility", "Sports Center" |
| building-theater | "Pittsburgh Civic Light Opera", "National Paris Opera", "Sanders Theatre" |
| event-attack/battle/war/militaryconflict | "Jurist", "Vietnam War", "Easter Offensive" |
| event-disaster | "1693 Sicily earthquake", "the 1912 North Mount Lyell Disaster", "1990s North Korean famine" |
| event-election | "Elections to the European Parliament", "March 1898 elections", "1982 Mitcham and Morden by-election" |
| event-other | "Eastwood Scoring Stage", "Masaryk Democratic Movement", "Union for a Popular Movement" |
| event-protest | "French Revolution", "Iranian Constitutional Revolution", "Russian Revolution" |
| event-sportsevent | "Stanley Cup", "National Champions", "World Cup" |
| location-GPE | "Croatian", "the Republic of Croatia", "Mediterranean Basin" |
| location-bodiesofwater | "Norfolk coast", "Atatürk Dam Lake", "Arthur Kill" |
| location-island | "Staten Island", "Laccadives", "new Samsat district" |
| location-mountain | "Ruweisat Ridge", "Salamander Glacier", "Miteirya Ridge" |
| location-other | "Victoria line", "Northern City Line", "Cartuther" |
| location-park | "Shenandoah National Park", "Gramercy Park", "Painted Desert Community Complex Historic District" |
| location-road/railway/highway/transit | "NJT", "Friern Barnet Road", "Newark-Elizabeth Rail Link" |
| organization-company | "Texas Chicken", "Dixy Chicken", "Church 's Chicken" |
| organization-education | "Barnard College", "MIT", "Belfast Royal Academy and the Ulster College of Physical Education" |
| organization-government/governmentagency | "Diet", "Congregazione dei Nobili", "Supreme Court" |
| organization-media/newspaper | "Clash", "TimeOut Melbourne", "Al Jazeera" |
| organization-other | "Defence Sector C", "IAEA", "4th Army" |
| organization-politicalparty | "Al Wafa ' Islamic", "Kenseitō", "Shimpotō" |
| organization-religion | "Christian", "Jewish", "UPCUSA" |
| organization-showorganization | "Lizzy", "Mr. Mister", "Bochumer Symphoniker" |
| organization-sportsleague | "First Division", "China League One", "NHL" |
| organization-sportsteam | "Arsenal", "Tottenham", "Luc Alphand Aventures" |
| other-astronomything | "Algol", "`` Caput Larvae ''", "Zodiac" |
| other-award | "Grand Commander of the Order of the Niger", "Order of the Republic of Guinea and Nigeria", "GCON" |
| other-biologything | "BAR", "N-terminal lipid", "Amphiphysin" |
| other-chemicalthing | "uranium", "carbon dioxide", "sulfur" |
| other-currency | "lac crore", "$", "Travancore Rupee" |
| other-disease | "French Dysentery Epidemic of 1779", "hypothyroidism", "bladder cancer" |
| other-educationaldegree | "Bachelor", "Master", "BSc ( Hons ) in physics" |
| other-god | "El", "Fujin", "Raijin" |
| other-language | "English", "Latin", "Breton-speaking" |
| other-law | "Thirty Years ' Peace", "Leahy–Smith America Invents Act ( AIA", "United States Freedom Support Act" |
| other-livingthing | "monkeys", "insects", "patchouli" |
| other-medical | "amitriptyline", "Pediatrics", "pediatrician" |
| person-actor | "Ellaline Terriss", "Edmund Payne", "Tchéky Karyo" |
| person-artist/author | "Hicks", "George Axelrod", "Gaetano Donizett" |
| person-athlete | "Jaguar", "Tozawa", "Neville" |
| person-director | "Bob Swaim", "Richard Quine", "Frank Darabont" |
| person-other | "Richard Benson", "Holden", "Campbell" |
| person-politician | "Emeric", "William", "Rivière" |
| person-scholar | "Stalmine", "Wurdack", "Stedman" |
| person-soldier | "Krukenberg", "Joachim Ziegler", "Helmuth Weidling" |
| product-airplane | "EC135T2 CPDS", "Spey-equipped FGR.2s", "Luton" |
| product-car | "100EX", "Phantom", "Corvettes - GT1 C6R" |
| product-food | "red grape", "V. labrusca", "yakiniku" |
| product-game | "Splinter Cell", "Hardcore RPG", "Airforce Delta" |
| product-other | "X11", "PDP-1", "Fairbottom Bobs" |
| product-ship | "HMS `` Chinkara ''", "Essex", "Congress" |
| product-software | "Wikipedia", "AmiPDF", "Apdf" |
| product-train | "55022", "Royal Scots Grey", "High Speed Trains" |
| product-weapon | "ZU-23-2M Wróbel", "ZU-23-2MR Wróbel II", "AR-15 's" |
## Uses
### Direct Use for Inference
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("guishe/span-marker-bge-base-en-v1.5-fewnerd-fine-super")
# Run inference
entities = model.predict("Most of the Steven Seagal movie \"Under Siege \"(co-starring Tommy Lee Jones) was filmed on the, which is docked on Mobile Bay at Battleship Memorial Park and open to the public.")
```
### Downstream Use
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
```python
from span_marker import SpanMarkerModel, Trainer
from datasets import load_dataset

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("guishe/span-marker-bge-base-en-v1.5-fewnerd-fine-super")
# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003") # For example CoNLL2003
# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
model=model,
train_dataset=dataset["train"],
eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("guishe/span-marker-bge-base-en-v1.5-fewnerd-fine-super-finetuned")
```
</details>
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:----------------------|:----|:--------|:----|
| Sentence length | 1 | 24.4945 | 267 |
| Entities per sentence | 0 | 2.5832 | 88 |
### Training Hyperparameters
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
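For reference, a minimal sketch of how the hyperparameters above could be expressed as `transformers.TrainingArguments` and passed to the SpanMarker `Trainer` shown earlier (the `output_dir` is a placeholder, not taken from the original run):
```python
from transformers import TrainingArguments

# Sketch of the run configuration listed above; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="models/span-marker-bge-base-en-v1.5-fewnerd-fine-super",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
# The Trainer from the fine-tuning snippet would then be constructed with `args=args`.
```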
### Training Results
| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:------:|:-----:|:---------------:|:--------------------:|:-----------------:|:-------------:|:-------------------:|
| 0.5964 | 3000 | 0.0324 | 0.6263 | 0.5826 | 0.6037 | 0.8981 |
| 1.1928 | 6000 | 0.0278 | 0.6620 | 0.6499 | 0.6559 | 0.9132 |
| 1.7893 | 9000 | 0.0264 | 0.6719 | 0.6614 | 0.6666 | 0.9159 |
| 2.3857 | 12000 | 0.0260 | 0.6724 | 0.6703 | 0.6714 | 0.9174 |
| 2.9821 | 15000 | 0.0258 | 0.6740 | 0.6713 | 0.6726 | 0.9177 |
### Framework Versions
- Python: 3.10.8
- SpanMarker: 1.4.0
- Transformers: 4.28.0
- PyTorch: 1.13.1+cu117
- Datasets: 2.14.4
- Tokenizers: 0.13.3
## Citation
### BibTeX
```
@software{Aarsen_SpanMarker,
author = {Aarsen, Tom},
license = {Apache-2.0},
title = {{SpanMarker for Named Entity Recognition}},
url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```
|
Harishr15/my-pet-cat-hhr
|
Harishr15
| 2023-11-13T07:16:59Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-13T07:10:37Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-cat-hhr Dreambooth model trained by Harishr15 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: JJCET-688
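A minimal inference sketch with 🤗 Diffusers (the concept token used in the prompt is an assumption; adjust it to the instance prompt used during training):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint and generate an image; the prompt token is assumed.
pipe = StableDiffusionPipeline.from_pretrained("Harishr15/my-pet-cat-hhr", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of my-pet-cat-hhr cat sitting on a windowsill").images[0]
image.save("my-pet-cat-hhr.png")
```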
Sample pictures of this concept:

|
sreejith8100/donut-base-death
|
sreejith8100
| 2023-11-13T07:11:44Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-11-13T06:29:31Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-death
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-death
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3296
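A minimal loading sketch (assuming the processor files were saved alongside the fine-tuned weights):
```python
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Load the fine-tuned Donut checkpoint for document parsing.
processor = DonutProcessor.from_pretrained("sreejith8100/donut-base-death")
model = VisionEncoderDecoderModel.from_pretrained("sreejith8100/donut-base-death")
```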
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8331 | 1.0 | 70 | 0.3877 |
| 0.2835 | 2.0 | 140 | 0.2715 |
| 0.1761 | 3.0 | 210 | 0.2680 |
| 0.095 | 4.0 | 280 | 0.2797 |
| 0.1615 | 5.0 | 350 | 0.3204 |
| 0.0158 | 6.0 | 420 | 0.3483 |
| 0.0059 | 7.0 | 490 | 0.3216 |
| 0.04 | 8.0 | 560 | 0.3296 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
tree12344/detr-resnet-50_finetuned_cppe5
|
tree12344
| 2023-11-13T07:06:26Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-11-12T23:22:55Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.
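A minimal inference sketch (the image path is a placeholder):
```python
from transformers import pipeline

# Run the fine-tuned DETR checkpoint through the object-detection pipeline.
detector = pipeline("object-detection", model="tree12344/detr-resnet-50_finetuned_cppe5")
predictions = detector("image.jpg")
print(predictions)
```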
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
zgce/Yi-34b-200K-alpaca-rpv3-scipy-6bpw-hb6-exl2
|
zgce
| 2023-11-13T07:03:48Z | 16 | 0 |
transformers
|
[
"transformers",
"Yi",
"text-generation",
"custom_code",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-11-13T03:32:24Z |
---
license: mit
---
This is an exl2-format model.
### Yi-34b-200K-alpaca-rpv3-scipy-6bpw-hb6-exl2
- base model: [Yi-34B-200K](https://huggingface.co/01-ai/Yi-34B-200K)
- LoRA: [Yi-34b-alpaca-cot-lora](https://huggingface.co/zzlgreat/Yi-34b-alpaca-cot-lora)
- LoRA: [limarpv3-yi-llama-34b-lora](https://huggingface.co/Doctor-Shotgun/limarpv3-yi-llama-34b-lora)
- LoRA: [Yi-34B-Spicyboros-3.1-LoRA](https://huggingface.co/LoneStriker/Yi-34B-Spicyboros-3.1-LoRA)
### description
- This is a test build for [exllamav2](https://github.com/turboderp/exllamav2); it requires an exllamav2 version after [Add Yi support](https://github.com/turboderp/exllamav2/commit/6d24e1ad40d89f64b1bd3ae36e639c74c9f730b2)
- 6.0bpw `python convert.py -i Yi-34b-200K-alpaca-rpv3-scipy -c exl2/0000.parquet -o Yi-34b-200K-alpaca-rpv3-scipy-4bpw-hb6-exl2 -hb 6 -l 4096 -b 6`
- [convert doc](https://github.com/turboderp/exllamav2/blob/master/doc/convert.md)
- calibration dataset: [WikiText-2-v1](https://huggingface.co/datasets/wikitext/blob/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet)
- In oobabooga/text-generation-webui, add `--trust-remote-code` to CMD_FLAGS.txt and use the ExLlamav2_HF loader to load the model
|
Gangothri-123/elephant
|
Gangothri-123
| 2023-11-13T06:47:41Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-11-13T06:42:42Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Elephant Dreambooth model trained by Gangothri-123 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: PIETW-298
Sample pictures of this concept:
.jpg)
|
emya/vicuna-7b-v1.5-steve-jobs-8bit-v1
|
emya
| 2023-11-13T06:43:20Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-11-05T22:33:21Z |
---
library_name: peft
---
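A minimal sketch of loading the adapter on top of its base model (the base checkpoint `lmsys/vicuna-7b-v1.5` is an assumption inferred from the repository name):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the (assumed) base model in 8-bit and attach the PEFT adapter.
base_id = "lmsys/vicuna-7b-v1.5"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "emya/vicuna-7b-v1.5-steve-jobs-8bit-v1")
```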
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
- PEFT 0.4.0
|
iambestfeed/phobert_finetune_biencoder
|
iambestfeed
| 2023-11-13T06:34:05Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-11-13T06:30:49Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# iambestfeed/phobert_finetune_biencoder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('iambestfeed/phobert_finetune_biencoder')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('iambestfeed/phobert_finetune_biencoder')
model = AutoModel.from_pretrained('iambestfeed/phobert_finetune_biencoder')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=iambestfeed/phobert_finetune_biencoder)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 8772 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
stephen423/gpt2-wikitext2
|
stephen423
| 2023-11-13T06:24:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:wikitext",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-09T13:35:36Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
datasets:
- wikitext
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the wikitext dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1045
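A minimal inference sketch for the fine-tuned checkpoint (the prompt is a placeholder):
```python
from transformers import pipeline

# Generate text with the fine-tuned GPT-2 checkpoint.
generator = pipeline("text-generation", model="stephen423/gpt2-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```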
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5489 | 1.0 | 2250 | 6.4641 |
| 6.148 | 2.0 | 4500 | 6.1911 |
| 6.0043 | 3.0 | 6750 | 6.1045 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.2
|
nikxtaco/ppo-SnowballTarget
|
nikxtaco
| 2023-11-13T06:22:50Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-11-13T06:22:43Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nikxtaco/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
malathi-5/tiger
|
malathi-5
| 2023-11-13T06:03:20Z | 0 | 0 | null |
[
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-13T06:00:07Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Tiger Dreambooth model trained by malathi-5 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: PIETW-227
Sample pictures of this concept:
.jpg)
|
hollowstrawberry/holotard
|
hollowstrawberry
| 2023-11-13T05:49:09Z | 0 | 131 | null |
[
"stable-diffusion",
"vtuber",
"hololive",
"stable diffusion 1.5",
"textual-inversion",
"lora",
"character",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-02-18T08:10:08Z |
---
license: creativeml-openrail-m
language:
- en
pretty_name: Hololive vtuber LoRAs and TIs
task_categories:
- question-answering
tags:
- stable-diffusion
- vtuber
- hololive
- stable diffusion 1.5
- textual-inversion
- lora
- character
- text-to-image
pipeline_tag: text-to-image
metrics:
- character
---
<img src="https://huggingface.co/hollowstrawberry/holotard/resolve/main/out.jpg" width="800"/>
# Preamble
These resources are intended to be used with [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui/). If you don't know what it is or how to use it effectively, [here's an extensive guide about it](https://huggingface.co/hollowstrawberry/stable-diffusion-guide/blob/main/README.md#index).
Recommended additional resources:
* [blessed2.vae.pt](https://huggingface.co/NoCrypt/blessed_vae/blob/main/blessed2.vae.pt) - without a VAE selected in the settings your colors will look faded.
* [TagComplete](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete) to view all available anime tags as you write your prompt.
* [Image Browser (updated fork)](https://github.com/AlUlkesh/stable-diffusion-webui-images-browser) to view your past images and their prompt metadata.
* [Lycoris extension](https://github.com/KohakuBlueleaf/a1111-sd-webui-locon) to be able to use locons and lohas, which are new types of lora.
* [EasyNegative](https://huggingface.co/datasets/gsdf/EasyNegative/blob/main/EasyNegative.safetensors) - put it in your `stable-diffusion-webui/embeddings` folder and write `EasyNegative` in your **negative** prompt to drastically improve your images.
I make Loras of my own, particularly multi-outfit or multi-character Loras. Many of the Loras found in this repo are made by me or my peers. [Here are some other Loras I made](https://huggingface.co/hollowstrawberry/multicharloras).
# Models
I merged these with hll4-p3-ep8, which is the newest vtuber-finetuned model. They have blessed2 VAE baked in. Many more mixes can be found [here](https://huggingface.co/grugger/chubas).
* [HeavenOrangeVtubers_hll4_final](https://huggingface.co/hollowstrawberry/holotard/blob/main/models/HeavenOrangeVtubers_hll4_final.safetensors)
* [AOM3_hll4_final](https://huggingface.co/hollowstrawberry/holotard/blob/main/models/AOM3_hll4_final.safetensors)
* [AOM2hard_hll4_final](https://huggingface.co/hollowstrawberry/holotard/blob/main/models/AOM2hard_hll4_final.safetensors)
* [Grapefruit4.1_hll4_final](https://huggingface.co/hollowstrawberry/holotard/blob/main/models/Grapefruit4.1_hll4_final.safetensors)
<details>
<summary>Click here for a comparison</summary>

</details>
# Loras
Most useful holo Loras will be linked to and backed up here. Put them in your `stable-diffusion-webui/models/Lora` folder. Use them in your prompt like this: `<lora:filename:1>`
Most Loras work well with a weight of `1.0`, but some older ones work best at `0.7`.
Learn to make your own Loras [with my guide](https://civitai.com/models/22530).
## Multi Outfit Loras
These are the most modern and have several outfits of each talent in a single Lora, or the main outfit if no other options are available. Check the model page / text files for examples.
### HoloEN
* [Mori Calliope ×9](https://civitai.com/models/173344)
* [Takanashi Kiara ×3](https://civitai.com/models/48195)
* [Ninomae Ina'nis ×5](https://civitai.com/models/17922)
* [Gawr Gura ×6](https://civitai.com/models/20447)
* [Amelia Watson ×4](https://civitai.com/models/27398)
* [Irys x4](https://civitai.com/models/111029)
* [Ceres Fauna ×3](https://civitai.com/models/97377)
* [Ouro Kronii ×3](https://civitai.com/models/124507)
* [Nanashi Mumei ×4](https://civitai.com/models/124549)
* [Hakos Baelz ×5](https://civitai.com/models/6080)
* [Tsukumo Sana ×1](https://civitai.com/models/20175)
* [Shiori Novella x1](https://civitai.com/models/116558)
* [Koseki Bijou x1](https://civitai.com/models/129972)
* [Nerissa Ravencroft x1](https://civitai.com/models/141707)
* [Fuwawa Abyssgard x1](https://civitai.com/models/132928)
* [Mococo Abyssgard x1](https://civitai.com/models/132419)
### HoloID
* [Ayunda Risu ×2](https://civitai.com/models/21209)
* [Moona Hoshinova ×1](https://civitai.com/models/124535)
* [Airani Iofifteen ×1](https://civitai.com/models/27558)
* [Kureiji Ollie ×4](https://civitai.com/models/28686)
* [Anya Melfissa ×3](https://civitai.com/models/27935)
* [Pavolia Reine ×3](https://civitai.com/models/15981)
* [Vestia Zeta ×3](https://civitai.com/models/52609)
* [Kaela Kovalskia ×1](https://civitai.com/models/28355)
* [Kobo Kanaeru ×3](https://civitai.com/models/145897)
### HoloJP
* [Tokino Sora ×3](https://civitai.com/models/19432)
* [Roboco-san ×4](https://civitai.com/models/154115)
* [Sakura Miko ×4](https://civitai.com/models/33471)
* [Hoshimachi Suisei ×8](https://civitai.com/models/11765)
* [Azki ×2](https://civitai.com/models/26419)
* [Yozora Mel ×3](https://civitai.com/models/21431)
* [Shirakami Fubuki ×9](https://civitai.com/models/156406)
* [Natsuiro Matsuri ×5](https://civitai.com/models/23883?modelVersionId=28543)
* [Akai Haato ×5](https://civitai.com/models/6489)
* [Aki Rosenthal ×2](https://civitai.com/models/124521)
* [Minato Aqua ×11](https://civitai.com/models/17816)
* [Murasaki Shion ×4](https://civitai.com/models/105212)
* [Nakiri Ayame ×6](https://civitai.com/models/12658)
* [Yuzuki Choco ×1](https://civitai.com/models/20305)
* [Oozora Subaru ×8](https://civitai.com/models/22332)
* [Ookami Mio ×6](https://civitai.com/models/156824)
* [Nekomata Okayu ×6](https://civitai.com/models/23962)
* [Inugami Korone ×7](https://civitai.com/models/153356)
* [Usada Pekora ×9](https://civitai.com/models/27247)
* [Shiranui Flare ×2](https://civitai.com/models/20509)
* [Shirogane Noel ×6](https://civitai.com/models/114191)
* [Houshou Marine ×8](https://civitai.com/models/47510)
* [Uruha Rushia ×9](https://civitai.com/models/36097)
* [Amane Kanata ×7](https://civitai.com/models/124532)
* [Tsunomaki Watame ×5](https://civitai.com/models/9430)
* [Tokoyami Towa ×6](https://civitai.com/models/156139)
* [Himemori Luna ×2](https://civitai.com/models/124564)
* [Kiryu Coco ×5](https://civitai.com/models/97114)
* [Yukihana Lamy ×4](https://civitai.com/models/16876)
* [Momosuzu Nene ×4](https://civitai.com/models/40437)
* [Shishiro Botan ×4](https://civitai.com/models/10390)
* [Omaru Polka ×3](https://civitai.com/models/124570)
* [La+ Darknesss ×1](https://civitai.com/models/14019)
* [Takane Lui ×1](https://civitai.com/models/17673)
* [Hakui Koyori ×3](https://civitai.com/models/21233)
* [Sakamata Chloe ×4](https://civitai.com/models/124502)
* [Kazama Iroha ×3](https://civitai.com/models/149755)
### Others
* [A-chan](https://civitai.com/models/27927)
* [Harusaki Nodoka](https://civitai.com/models/33130)
* [Mano Aloe](https://civitai.com/models/35099)
## Small Loras
Some older Loras were manually scaled down from 144 MB to 18 MB or 36 MB and perform almost the same. They need a higher weight than the original to preserve detail.
This works because dim 128 used to be the default setting for making Loras but that was completely overkill and dim 8/16/32 work just as well.
|
JeukHwang/distilbert-base-uncased-finetuned-squad
|
JeukHwang
| 2023-11-13T05:44:34Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-11-13T04:14:15Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4462
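A minimal inference sketch for the fine-tuned checkpoint (question and context are placeholders):
```python
from transformers import pipeline

# Extractive question answering with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="JeukHwang/distilbert-base-uncased-finetuned-squad")
print(qa(question="What dataset was used?", context="The model was fine-tuned on the SQuAD v2 dataset."))
```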
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1935 | 1.0 | 8235 | 1.2617 |
| 0.9278 | 2.0 | 16470 | 1.2924 |
| 0.7477 | 3.0 | 24705 | 1.4462 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
xiaoqijun/111
|
xiaoqijun
| 2023-11-13T05:26:25Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:THUDM/chatglm2-6b",
"base_model:adapter:THUDM/chatglm2-6b",
"region:us"
] | null | 2023-11-13T05:25:53Z |
---
library_name: peft
base_model: THUDM/chatglm2-6b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.0
|
dsmsb/16class_combo_vth_new_pp_full_updated_tweet_13nov23_v1
|
dsmsb
| 2023-11-13T05:24:19Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-13T03:59:16Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 16class_combo_vth_new_pp_full_updated_tweet_13nov23_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 16class_combo_vth_new_pp_full_updated_tweet_13nov23_v1
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0420
- Accuracy: 0.9908
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5945 | 1.0 | 735 | 0.7331 | 0.7813 |
| 0.8273 | 2.0 | 1470 | 0.4370 | 0.8743 |
| 0.4943 | 3.0 | 2205 | 0.3176 | 0.9061 |
| 0.3995 | 4.0 | 2940 | 0.2252 | 0.9335 |
| 0.2712 | 5.0 | 3675 | 0.1714 | 0.9517 |
| 0.2352 | 6.0 | 4410 | 0.1183 | 0.9690 |
| 0.1794 | 7.0 | 5145 | 0.0823 | 0.9795 |
| 0.1361 | 8.0 | 5880 | 0.0634 | 0.9861 |
| 0.1111 | 9.0 | 6615 | 0.0514 | 0.9885 |
| 0.0891 | 10.0 | 7350 | 0.0440 | 0.9900 |
| 0.0675 | 11.0 | 8085 | 0.0420 | 0.9908 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
vihangd/stableplats-3b-v1
|
vihangd
| 2023-11-13T05:23:07Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"stablelm_epoch",
"text-generation",
"custom_code",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-11-11T06:52:08Z |
---
license: cc-by-sa-4.0
---
<p><h1> StablePlats-3b </h1></p>
An experimental finetune of StableLM-3B-4E1T with Alpaca-QLoRA
<h2> Datasets </h2>
Trained on Alpaca-style datasets
<p><h2> Prompt Template </h2></p>
Uses an Alpaca-style prompt template; a sketch is shown below
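A minimal sketch of a commonly used Alpaca-style template (the exact wording this model expects may differ):
```python
# Commonly used Alpaca-style prompt template; treat the exact wording as an assumption.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
prompt = ALPACA_TEMPLATE.format(instruction="Explain what a finetune is in one sentence.")
```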
|
tommylam/A2C-pandaPickAndPlace-v3
|
tommylam
| 2023-11-13T05:18:29Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T05:12:35Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository files for the exact archive name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="tommylam/A2C-pandaPickAndPlace-v3", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
namkyeong/whisper_1
|
namkyeong
| 2023-11-13T05:14:22Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:S000001",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-13T00:47:30Z |
---
language:
- ko
license: apache-2.0
base_model: openai/whisper-base
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- S000001
model-index:
- name: openai/whisper-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-base
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the /nas/data/lowband_telephone/wav/training/D01/J01/S000001 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3994
- Cer: 18.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0 | 1000.0 | 1000 | 1.2835 | 17.5 |
| 0.0 | 2000.0 | 2000 | 1.3486 | 18.3333 |
| 0.0 | 3000.0 | 3000 | 1.3850 | 18.3333 |
| 0.0 | 4000.0 | 4000 | 1.3994 | 18.3333 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
|
BlueWard/t5-small-with-generate-finetune-indosum
|
BlueWard
| 2023-11-13T05:13:22Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-13T03:23:33Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-with-generate-finetune-indosum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-with-generate-finetune-indosum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6542
- Rouge1: 0.2065
- Rouge2: 0.1572
- Rougel: 0.2026
- Rougelsum: 0.2026
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.7518 | 1.0 | 4460 | 0.6542 | 0.2065 | 0.1572 | 0.2026 | 0.2026 | 19.0 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.2
|
danabib/elsed_models
|
danabib
| 2023-11-13T05:02:07Z | 4 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-11-13T00:01:27Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-danabib/elsed_models
These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning.
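A minimal loading sketch with 🤗 Diffusers (preparing the conditioning image, e.g. an ELSED edge map, is not shown):
```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Load the ControlNet weights and attach them to the SDXL base pipeline.
controlnet = ControlNetModel.from_pretrained("danabib/elsed_models", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
```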
You can find some example images below.
prompt: modern kitchen interior red brick walls

prompt: a man walking at sidewalk, buildins at background, cars parked

|
Hrithik2212/Dr.Llama2-7b-qlora-chat-experimental
|
Hrithik2212
| 2023-11-13T05:01:39Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-11-12T06:47:24Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2.dev0
|
athirdpath/Eileithyia-13B-LORA
|
athirdpath
| 2023-11-13T04:59:47Z | 7 | 2 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:KoboldAI/LLaMA2-13B-TiefighterLR",
"base_model:quantized:KoboldAI/LLaMA2-13B-TiefighterLR",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2023-11-10T05:57:23Z |
---
license: llama2
base_model: KoboldAI/LLaMA2-13B-TiefighterLR
tags:
- generated_from_trainer
model-index:
- name: Eileithyia-13B
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
This model is a fine-tuned version of [KoboldAI/LLaMA2-13B-TiefighterLR](https://huggingface.co/KoboldAI/LLaMA2-13B-TiefighterLR) on a private dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9844
## Model description
Eileithyia-13B is an unaligned, roleplay oriented model created by merging KoboldAI/LLaMA2-13B-TiefighterLR with a bespoke LORA trained directly on TiefighterLR.
Eileithyia, as is the current trend, is named after a Greek goddess; in this case it is the goddess of childbirth and pregnancy.
## Training and evaluation data
The private ~400k token dataset used to train the LORA was Alpaca formatted and focused on 4 primary categories:
- Medical texts (on pregnancy, reproductive organs, and impregnation). These are formatted so the model, in character as a doctor, answers a patient's question in short to medium form.
- Excerpts from short stories and novellas (erotic, romantic, and platonic) centered around both realistic and fantastic pregnancy. These are sliced into ~2048 token chunks, and these long-form responses are all tied to the command “Enter narrator mode.” in the instructions.
- A selection from PIPPA, using a wide keyword search for related terms then human curated (...the things I’ve seen…). These are converted to Alpaca with “Enter RP mode.” in all the instruction fields.
- ~42k tokens of GPT-4 generated data on pregnancy from various characters’ perspectives, focusing on different responses and stages. Also includes a synopsis for each week in various styles.
- ~18k tokens of GPT-4 generated data on non-maternal role-playing from various characters’ perspectives, focusing on different situations and emotions. Includes many multi-turn conversations.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 40
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8419 | 0.75 | 25 | 2.5257 |
| 1.7748 | 1.5 | 50 | 2.2467 |
| 1.813 | 2.25 | 75 | 2.0914 |
| 1.8067 | 2.99 | 100 | 2.0235 |
| 1.5346 | 3.74 | 125 | 1.9939 |
| 1.5869 | 4.49 | 150 | 1.9844 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Blitz0501/llama_test_case_generation
|
Blitz0501
| 2023-11-13T04:52:18Z | 1 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"region:us"
] | null | 2023-11-12T20:25:43Z |
---
library_name: peft
base_model: codellama/CodeLlama-7b-Instruct-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2.dev0
|
rollerhafeezh-amikom/xlm-roberta-large-ner-silvanus
|
rollerhafeezh-amikom
| 2023-11-13T04:51:09Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-12T19:58:54Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large-ner-silvanus
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
config: id
split: validation
args: id
metrics:
- name: Precision
type: precision
value: 0.9574581228396704
- name: Recall
type: recall
value: 0.9664519592055824
- name: F1
type: f1
value: 0.9619340189662082
- name: Accuracy
type: accuracy
value: 0.9889216263995286
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-ner-silvanus
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0495
- Precision: 0.9575
- Recall: 0.9665
- F1: 0.9619
- Accuracy: 0.9889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
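Expressed as `transformers.TrainingArguments`, these settings would look roughly like the sketch below (the output directory is a placeholder and unlisted options keep their defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-large-ner-silvanus",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 x 4 = effective train batch size of 16
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
```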
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 427 | 0.0560 | 0.9339 | 0.9514 | 0.9426 | 0.9828 |
| 0.1405 | 2.0 | 855 | 0.0539 | 0.9430 | 0.9595 | 0.9512 | 0.9859 |
| 0.0449 | 3.0 | 1281 | 0.0495 | 0.9575 | 0.9665 | 0.9619 | 0.9889 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
openskyml/midjourney-mini
|
openskyml
| 2023-11-13T04:22:14Z | 249 | 14 |
diffusers
|
[
"diffusers",
"midjourney",
"midjourney-mini",
"openskyml",
"text-to-image",
"en",
"ru",
"de",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-10-11T17:45:46Z |
---
license: mit
tags:
- midjourney
- midjourney-mini
- openskyml
pinned: true
language:
- en
- ru
- de
- fr
library_name: diffusers
pipeline_tag: text-to-image
---
<h1><center>Midjourney-mini</center></h1>
## Description
Midjourney-mini is a free artificial intelligence model that can create realistic images based on textual descriptions. It has the following advantages:
- **Free:** Midjourney-mini is completely free to use for anyone.
- **High-quality image generation:** The model uses modern deep learning methods to create high-quality images.
- **Ease of use:** Working with Midjourney-mini does not require special programming or machine learning knowledge. The model has a convenient interface and works in your browser.
Although Midjourney-mini is a trimmed-down version of the paid Midjourney model, it still provides powerful functionality and can be used in various applications.
# Use
## In Diffusers
```py
from diffusers import DiffusionPipeline
pipeline = DiffusionPipeline.from_pretrained("midjourney-community/midjourney-mini")
```
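Once the pipeline is loaded, generation is a single call; a minimal usage sketch (prompt, device handling and output filename are illustrative):

```py
import torch

pipeline = pipeline.to("cuda" if torch.cuda.is_available() else "cpu")
image = pipeline("a futuristic city at sunset, highly detailed").images[0]
image.save("midjourney-mini-sample.png")
```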
## Deploy in Spaces
```py
import gradio as gr
gr.Interface.load("models/midjourney-community/midjourney-mini").launch()
```
## Deploy in Inference API
```py
import requests
API_URL = "https://api-inference.huggingface.co/models/midjourney-community/midjourney-mini"
headers = {"Authorization": "Bearer hf_token"}
def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.content
image_bytes = query({
"inputs": "Astronaut riding a horse",
})
```
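The Inference API returns raw image bytes; assuming Pillow is installed, they can be decoded and saved like this (a sketch, not part of the original card):

```py
import io
from PIL import Image

image = Image.open(io.BytesIO(image_bytes))
image.save("astronaut.png")
```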
```js
async function query(data) {
const response = await fetch(
"https://api-inference.huggingface.co/models/midjourney-community/midjourney-mini",
{
headers: { Authorization: "Bearer hf_token" },
method: "POST",
body: JSON.stringify(data),
}
);
const result = await response.blob();
return result;
}
query({"inputs": "Astronaut riding a horse"}).then((response) => {
// Use image
});
```
|
syabusyabu0141/mlm_ro_mix
|
syabusyabu0141
| 2023-11-13T04:20:05Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-11-13T04:12:34Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: syabusyabu0141/test3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# syabusyabu0141/test3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9467
- Validation Loss: 0.7735
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 5e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.9467 | 0.7735 | 0 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
leowcs/ppo-LundaLander-RL-Tut
|
leowcs
| 2023-11-13T04:15:18Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T04:14:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.90 +/- 43.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
minh009/my_awesome_model
|
minh009
| 2023-11-13T04:08:33Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-13T04:01:18Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: minh009/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# minh009/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9951
- Validation Loss: 0.8844
- Train Accuracy: 0.7955
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 125, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.4992 | 1.3056 | 0.5795 | 0 |
| 1.2073 | 1.0418 | 0.625 | 1 |
| 0.9951 | 0.8844 | 0.7955 | 2 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
nikxtaco/a2c-PandaReachDense-v3
|
nikxtaco
| 2023-11-13T03:38:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T03:22:14Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.14 +/- 0.10
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
syabusyabu0141/tc_be_chains
|
syabusyabu0141
| 2023-11-13T03:16:08Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-11-13T03:03:06Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: syabusyabu0141/tc_be_chains
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# syabusyabu0141/tc_be_chains
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0533
- Validation Loss: 0.0324
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0533 | 0.0324 | 0 |
### Framework versions
- Transformers 4.35.0
- TensorFlow 2.14.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
unoooo/llama-7b-hf-tf
|
unoooo
| 2023-11-13T03:06:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-13T01:34:11Z |
---
license: other
---
LLaMA-7B converted to work with Transformers/HuggingFace. This is under a special license; please see the LICENSE file for details.
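A minimal loading sketch with the `transformers` Auto classes (hedged: it assumes the repository contains the converted weights and tokenizer; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unoooo/llama-7b-hf-tf")
model = AutoModelForCausalLM.from_pretrained("unoooo/llama-7b-hf-tf", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```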
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
| Number of parameters | dimension | n heads | n layers | Learn rate | Batch size | n tokens |
|----------------------|-----------|---------|----------|------------|------------|----------|
| 7B                   | 4096      | 32      | 32       | 3.0E-04    | 4M         | 1T       |
| 13B                  | 5120      | 40      | 40       | 3.0E-04    | 4M         | 1T       |
| 33B                  | 6656      | 52      | 60       | 1.5E-04    | 4M         | 1.4T     |
| 65B                  | 8192      | 64      | 80       | 1.5E-04    | 4M         | 1.4T     |

*Table 1 - Summary of LLaMA Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
| Number of parameters | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | COPA |
|----------------------|-------|------|------|-----------|------------|-------|-------|------|------|
| 7B                   | 76.5  | 79.8 | 48.9 | 76.1      | 70.1       | 76.7  | 47.6  | 57.2 | 93   |
| 13B                  | 78.1  | 80.1 | 50.4 | 79.2      | 73         | 78.1  | 52.7  | 56.4 | 94   |
| 33B                  | 83.1  | 82.3 | 50.4 | 82.8      | 76         | 81.4  | 57.8  | 58.6 | 92   |
| 65B                  | 85.3  | 82.8 | 52.3 | 84.2      | 77         | 81.5  | 56    | 60.2 | 94   |

*Table 2 - Summary of LLaMA Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that a lower value is better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
crumb/horizon-pythia-ft-1.4b
|
crumb
| 2023-11-13T02:47:58Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-13T02:00:31Z |
---
license: apache-2.0
language:
- en
---
validation loss
| model |arxiv | github | books | wiki | webtext |
| --- | --- | --- | --- | --- | --- |
| horizon-pythia-ft-1.4b | **2.05** | **1.23** | **1.90** | **2.12** | **2.61** |
| pythia-1.4b | 2.13 | 1.25 | 1.91 | 2.18 | 2.62 |
horizon-derived reweighting media (mix-13)
| subset | documents |
| --- | --- |
| arxiv | 650 |
| github | 229 |
| books | 645 |
| wiki | 1560 |
| webtext | 10440 |
|
prushton/logo-lora
|
prushton
| 2023-11-13T02:09:10Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-11-12T00:27:53Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - prushton/logo-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the myradeng/random100logos dataset. You can find some example images below.

|
yeye776/t5-brokarry-total-v5
|
yeye776
| 2023-11-13T02:05:49Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-large",
"base_model:finetune:paust/pko-t5-large",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-11-13T02:02:20Z |
---
license: cc-by-4.0
base_model: paust/pko-t5-large
tags:
- generated_from_trainer
model-index:
- name: t5-brokarry-total-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-brokarry-total-v5
This model is a fine-tuned version of [paust/pko-t5-large](https://huggingface.co/paust/pko-t5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
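As a rough sketch, the same schedule can be written with `transformers.Seq2SeqTrainingArguments` (the output directory is a placeholder; options not listed above keep their defaults):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-brokarry-total-v5",  # placeholder
    learning_rate=7e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,  # 8 x 4 = effective train batch size of 32
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    seed=42,
)
```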
### Training results
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
DylanJHJ/bert-base-final-v0-ep2
|
DylanJHJ
| 2023-11-13T01:56:16Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-11-13T01:24:24Z |
---
license: apache-2.0
---
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import CrossEntropyLoss, KLDivLoss
from transformers.modeling_outputs import TokenClassifierOutput
from transformers import BertModel, BertPreTrainedModel
class BertForHighlightPrediction(BertPreTrainedModel):
_keys_to_ignore_on_load_unexpected = [r"pooler"]
def __init__(self, config, **model_kwargs):
super().__init__(config)
# self.model_args = model_kargs["model_args"]
self.num_labels = config.num_labels
self.bert = BertModel(config, add_pooling_layer=False)
classifier_dropout = (
config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob
)
self.dropout = nn.Dropout(classifier_dropout)
self.tokens_clf = nn.Linear(config.hidden_size, config.num_labels)
self.tau = model_kwargs.pop('tau', 1)
self.gamma = model_kwargs.pop('gamma', 1)
self.soft_labeling = model_kwargs.pop('soft_labeling', False)
self.init_weights()
self.softmax = nn.Softmax(dim=-1)
def forward(self,
input_ids=None,
probs=None, # soft-labeling
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,):
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
tokens_output = outputs[0]
highlight_logits = self.tokens_clf(self.dropout(tokens_output))
loss = None
if labels is not None:
loss_fct = CrossEntropyLoss()
active_loss = attention_mask.view(-1) == 1
active_logits = highlight_logits.view(-1, self.num_labels)
active_labels = torch.where(
active_loss,
labels.view(-1),
torch.tensor(loss_fct.ignore_index).type_as(labels)
)
loss_ce = loss_fct(active_logits, active_labels)
loss_kl = 0
if self.soft_labeling:
loss_fct = KLDivLoss(reduction='sum')
active_mask = (attention_mask * token_type_ids).view(-1, 1) # BL 1
n_active = (active_mask == 1).sum()
active_mask = active_mask.repeat(1, 2) # BL 2
input_logp = F.log_softmax(active_logits / self.tau, -1) # BL 2
target_p = torch.cat(( (1-probs).view(-1, 1), probs.view(-1, 1)), -1) # BL 2
loss_kl = loss_fct(input_logp, target_p * active_mask) / n_active
loss = self.gamma * loss_ce + (1-self.gamma) * loss_kl
# print("Loss:\n")
# print(loss)
# print(loss_kl)
# print(loss_ce)
return TokenClassifierOutput(
loss=loss,
logits=highlight_logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
    @torch.no_grad()
    def inference(self, batch_inputs):
        # `batch_inputs` is the dict produced by the tokenizer (input_ids, attention_mask, ...)
        outputs = self.forward(**batch_inputs, output_hidden_states=True)
        probabilities = self.softmax(self.tokens_clf(outputs.hidden_states[-1]))
        predictions = torch.argmax(probabilities, dim=-1)
        # keep predictions only for non-padding (active) tokens
        active_tokens = batch_inputs['attention_mask'] == 1
        active_predictions = torch.where(
            active_tokens,
            predictions,
            torch.tensor(-1).type_as(predictions)
        )
        return {
            "probabilities": probabilities[:, :, 1].detach(),  # shape: (batch, length)
            "active_predictions": active_predictions.detach(),
            "active_tokens": active_tokens,
        }
```
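A usage sketch for the class above (hedged: it assumes this repository's checkpoint loads into `BertForHighlightPrediction` with two labels; the query/passage pair is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DylanJHJ/bert-base-final-v0-ep2")
model = BertForHighlightPrediction.from_pretrained("DylanJHJ/bert-base-final-v0-ep2", num_labels=2)
model.eval()

batch_inputs = tokenizer(
    "what is the capital of france",        # query (illustrative)
    "paris is the capital city of france",  # passage (illustrative)
    return_tensors="pt",
)
result = model.inference(batch_inputs)
print(result["probabilities"])  # per-token highlight probabilities
```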
|
Asheron/Taxi-v3
|
Asheron
| 2023-11-13T01:51:27Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T01:51:25Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.81
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Asheron/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jcfneto/bert-tv-portuguese
|
jcfneto
| 2023-11-13T01:45:07Z | 5 | 2 |
transformers
|
[
"transformers",
"tf",
"bert",
"pretraining",
"pt",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-05-03T13:10:38Z |
---
license: mit
language:
- pt
model-index:
- name: bert-tv-portuguese
results: []
---
# BERT-TV
<img src="https://cdn-uploads.huggingface.co/production/uploads/6385e26cc12615765caa6afe/3lSkNEfW57BNudZIFyTH2.png" width=400 height=400>
Image generated by ChatGPT with DALL-E from OpenAI.
## Model description
BERT-TV is a BERT model specifically pre-trained from scratch on a dataset of television reviews in Brazilian Portuguese.
This model is tailored to grasp the nuances and specificities associated with the context and sentiment expressed in
television reviews. BERT-TV features 6 layers, 12 attention heads, and an embedding dimension of 768, making it adept at
handling NLP tasks related to television content in Portuguese.
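A minimal loading sketch (hedged: it assumes the checkpoint exposes standard TensorFlow BERT weights together with the adapted BERTimbau tokenizer; the example sentence is illustrative):

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("jcfneto/bert-tv-portuguese")
model = TFAutoModel.from_pretrained("jcfneto/bert-tv-portuguese")

inputs = tokenizer("A imagem dessa TV é excelente.", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence_length, 768)
```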
## Usage ideas
- Sentiment analysis on television reviews in Portuguese
- Recommender systems for television models in Portuguese
- Text classification for different television brands and types in Portuguese
- Named entity recognition in television-related contexts in Portuguese
- Aspect extraction for features and specifications of televisions in Portuguese
- Text generation for summarizing television reviews in Portuguese
## Limitations and bias
As the BERT-TV model is exclusively pre-trained on television reviews in Brazilian Portuguese, its performance may be
limited when applied to other types of text or reviews in different languages. Furthermore, the model could inherit
biases present in the training data, which may influence its predictions or embeddings. The tokenizer is adapted from
the BERTimbau tokenizer, optimized for Brazilian Portuguese, thus it might not deliver optimal results with other
languages or Portuguese dialects.
## Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.1
- Datasets 2.11.0
- Tokenizers 0.13.3
|
victor79/taxi_v3
|
victor79
| 2023-11-13T01:11:06Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-13T01:11:03Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="victor79/taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MayIBorn/mrpc_qlora-llama-7b_normal
|
MayIBorn
| 2023-11-13T00:54:15Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"region:us"
] | null | 2023-11-13T00:54:10Z |
---
library_name: peft
base_model: huggyllama/llama-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
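To reuse the adapter, the base model can be loaded with the same 4-bit settings and the PEFT weights attached on top; a hedged sketch (it assumes standard `peft`/`transformers` loading works for this checkpoint):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Same quantization settings as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
base_model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, "MayIBorn/mrpc_qlora-llama-7b_normal")
```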
### Framework versions
- PEFT 0.7.0.dev0
|