| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 06:29:04) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 530 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 06:28:51) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
begangowsik/HTML-finetunined-WORK-A
|
begangowsik
| 2023-12-24T12:50:24Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TheBloke/Llama-2-7B-Chat-GPTQ",
"base_model:adapter:TheBloke/Llama-2-7B-Chat-GPTQ",
"region:us"
] | null | 2023-12-24T12:50:01Z |
---
library_name: peft
base_model: TheBloke/Llama-2-7b-Chat-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
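The card leaves this section as a template, so the snippet below is only a hedged sketch (not from the original card) of how a PEFT adapter like this is typically loaded on top of the GPTQ base model listed in the metadata. The prompt is illustrative, and loading GPTQ weights additionally requires the `optimum`/`auto-gptq` stack.
```python
# Hedged sketch: load the adapter on top of the GPTQ base model named in the card metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Llama-2-7B-Chat-GPTQ"
adapter_id = "begangowsik/HTML-finetunined-WORK-A"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt only; the card does not document the expected prompt format.
inputs = tokenizer("Write a simple HTML page with a heading.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```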
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
nurcan/turkishReviews-ds-mini
|
nurcan
| 2023-12-24T12:50:08Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-23T18:53:50Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_keras_callback
model-index:
- name: turkishReviews-ds-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# turkishReviews-ds-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.2671
- Validation Loss: 8.7544
- Epoch: 2
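The card itself provides no usage code; the snippet below is a rough sketch (not from the original card) that assumes the checkpoint loads as a standard GPT-2 causal LM with TensorFlow weights.
```python
# Hedged sketch: generate text with the fine-tuned checkpoint (TensorFlow weights).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="nurcan/turkishReviews-ds-mini",
    framework="tf",  # the repo ships TensorFlow weights
)
# "Bu ürün" ("This product") is only an illustrative prompt.
print(generator("Bu ürün", max_new_tokens=40)[0]["generated_text"])
```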
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -896, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.0918 | 9.2015 | 0 |
| 8.6097 | 8.9164 | 1 |
| 8.2671 | 8.7544 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
mlx-community/Llama-2-7b-chat-4-bit
|
mlx-community
| 2023-12-24T12:49:14Z | 56 | 6 |
mlx
|
[
"mlx",
"llama",
"facebook",
"meta",
"llama-2",
"text-generation",
"license:llama2",
"region:us"
] |
text-generation
| 2023-12-22T16:07:51Z |
---
pipeline_tag: text-generation
library_name: mlx
inference: false
tags:
- facebook
- meta
- llama
- llama-2
- mlx
license: llama2
---
# **Llama 2 7B Chat 4-bit**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, in `npz` format suitable for use in Apple's MLX framework.
Weights have been converted to `float16` from the original `bfloat16` type, because `numpy` is not compatible with `bfloat16` out of the box.
How to use with [MLX](https://github.com/ml-explore/mlx).
```bash
# Install mlx, mlx-examples, huggingface-cli
pip install mlx
pip install huggingface_hub hf_transfer
git clone https://github.com/ml-explore/mlx-examples.git
# Download model
export HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download --local-dir Llama-2-7b-chat-mlx/ mlx-community/Llama-2-7b-chat-4-bit
# Run example
python mlx-examples/llms/llama/llama.py --prompt "My name is " --model-path Llama-2-7b-chat-mlx/
```
Please refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b-chat) for details on Llama 2.
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError0.0_Seed104
|
behzadnet
| 2023-12-24T12:47:48Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-24T12:47:44Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
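For reference, the settings listed above correspond roughly to the following `transformers` `BitsAndBytesConfig` (a reconstruction sketch, not taken from the original training code).
```python
# Approximate reconstruction of the quantization config listed above (sketch only).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```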
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_RandomError0.0_Seed104
|
behzadnet
| 2023-12-24T12:47:38Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-24T12:47:33Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
Q-bert/Merged-AGI-7B
|
Q-bert
| 2023-12-24T12:41:18Z | 56 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Math",
"merge",
"en",
"dataset:meta-math/MetaMathQA",
"base_model:Q-bert/MetaMath-Cybertron-Starling",
"base_model:merge:Q-bert/MetaMath-Cybertron-Starling",
"base_model:fblgit/juanako-7b-UNA",
"base_model:merge:fblgit/juanako-7b-UNA",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-10T09:20:47Z |
---
license: cc-by-nc-4.0
datasets:
- meta-math/MetaMathQA
language:
- en
pipeline_tag: text-generation
tags:
- Math
- merge
base_model:
- Q-bert/MetaMath-Cybertron-Starling
- fblgit/juanako-7b-UNA
---
## Merged-AGI-7B
Merge [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) and [fblgit/juanako-7b-UNA](https://huggingface.co/fblgit/juanako-7b-UNA) using slerp merge.
You can use the ChatML prompt format, as sketched below.
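The card does not show the template itself; the snippet below is a hedged sketch of building a ChatML-style prompt with `transformers`, assuming the tokenizer ships a ChatML chat template (not confirmed by the card).
```python
# Hedged sketch: build a ChatML-style prompt for this merge.
# Assumes the tokenizer defines a ChatML chat template; verify before relying on it.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Q-bert/Merged-AGI-7B")
messages = [
    {"role": "system", "content": "You are a helpful math assistant."},
    {"role": "user", "content": "What is 12 * 7?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should contain <|im_start|> / <|im_end|> markers if the template is ChatML
```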
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [Coming soon]()
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | Coming soon |
| ARC (25-shot) | Coming soon |
| HellaSwag (10-shot) | Coming soon |
| MMLU (5-shot) | Coming soon |
| TruthfulQA (0-shot) | Coming soon |
| Winogrande (5-shot) | Coming soon |
| GSM8K (5-shot) | Coming soon |
|
chanhua/autotrain-xcbf5-99oqk
|
chanhua
| 2023-12-24T12:37:19Z | 25 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"dataset:chanhua/autotrain-data-autotrain-xcbf5-99oqk",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T12:37:14Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- chanhua/autotrain-data-autotrain-xcbf5-99oqk
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.094420313835144
f1_macro: 0.45714285714285713
f1_micro: 0.5714285714285714
f1_weighted: 0.5061224489795919
precision_macro: 0.4666666666666666
precision_micro: 0.5714285714285714
precision_weighted: 0.5428571428571428
recall_macro: 0.5555555555555555
recall_micro: 0.5714285714285714
recall_weighted: 0.5714285714285714
accuracy: 0.5714285714285714
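The card does not include inference code; a minimal sketch (assuming the checkpoint works with the standard image-classification pipeline) would be:
```python
# Hedged sketch: classify a local image with the AutoTrain model.
from transformers import pipeline

classifier = pipeline("image-classification", model="chanhua/autotrain-xcbf5-99oqk")
print(classifier("path/to/image.jpg"))  # the path is a placeholder
```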
|
hkivancoral/hushem_40x_deit_base_rms_001_fold1
|
hkivancoral
| 2023-12-24T12:36:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T11:53:55Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_rms_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_rms_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 7.0637
- Accuracy: 0.5333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3874 | 1.0 | 215 | 1.5845 | 0.2667 |
| 1.2829 | 2.0 | 430 | 1.2221 | 0.4 |
| 0.7962 | 3.0 | 645 | 2.1065 | 0.4 |
| 0.7528 | 4.0 | 860 | 1.0651 | 0.5556 |
| 0.6029 | 5.0 | 1075 | 1.5642 | 0.4889 |
| 0.6246 | 6.0 | 1290 | 1.7962 | 0.4222 |
| 0.589 | 7.0 | 1505 | 1.4819 | 0.4444 |
| 0.6081 | 8.0 | 1720 | 1.4452 | 0.4222 |
| 0.4808 | 9.0 | 1935 | 1.4389 | 0.4444 |
| 0.4155 | 10.0 | 2150 | 1.7698 | 0.4667 |
| 0.4393 | 11.0 | 2365 | 1.4569 | 0.5778 |
| 0.4007 | 12.0 | 2580 | 2.1115 | 0.4 |
| 0.3758 | 13.0 | 2795 | 1.5230 | 0.5556 |
| 0.3244 | 14.0 | 3010 | 2.2901 | 0.4444 |
| 0.3063 | 15.0 | 3225 | 2.0129 | 0.4889 |
| 0.3072 | 16.0 | 3440 | 2.2969 | 0.5333 |
| 0.2444 | 17.0 | 3655 | 2.5054 | 0.4667 |
| 0.2293 | 18.0 | 3870 | 2.3449 | 0.4889 |
| 0.2391 | 19.0 | 4085 | 2.0401 | 0.6444 |
| 0.1843 | 20.0 | 4300 | 2.7271 | 0.5333 |
| 0.2073 | 21.0 | 4515 | 2.2599 | 0.4889 |
| 0.194 | 22.0 | 4730 | 3.1378 | 0.4444 |
| 0.2943 | 23.0 | 4945 | 2.7236 | 0.5333 |
| 0.2089 | 24.0 | 5160 | 2.5054 | 0.5778 |
| 0.2145 | 25.0 | 5375 | 3.8073 | 0.4667 |
| 0.1232 | 26.0 | 5590 | 3.5697 | 0.4889 |
| 0.1349 | 27.0 | 5805 | 3.5985 | 0.5333 |
| 0.1548 | 28.0 | 6020 | 3.0930 | 0.4889 |
| 0.0655 | 29.0 | 6235 | 4.3232 | 0.4889 |
| 0.1304 | 30.0 | 6450 | 3.6994 | 0.5333 |
| 0.0997 | 31.0 | 6665 | 3.7329 | 0.5333 |
| 0.0825 | 32.0 | 6880 | 3.4793 | 0.5333 |
| 0.154 | 33.0 | 7095 | 5.2562 | 0.4667 |
| 0.1206 | 34.0 | 7310 | 4.5299 | 0.4889 |
| 0.1019 | 35.0 | 7525 | 3.6522 | 0.5111 |
| 0.019 | 36.0 | 7740 | 3.9235 | 0.5333 |
| 0.0485 | 37.0 | 7955 | 4.7342 | 0.5556 |
| 0.0155 | 38.0 | 8170 | 4.4779 | 0.5778 |
| 0.0142 | 39.0 | 8385 | 4.2139 | 0.5556 |
| 0.0256 | 40.0 | 8600 | 5.0724 | 0.5333 |
| 0.0211 | 41.0 | 8815 | 4.8895 | 0.4889 |
| 0.019 | 42.0 | 9030 | 4.8291 | 0.5556 |
| 0.0047 | 43.0 | 9245 | 5.9102 | 0.5333 |
| 0.0027 | 44.0 | 9460 | 5.9480 | 0.5556 |
| 0.0009 | 45.0 | 9675 | 6.2260 | 0.5333 |
| 0.0008 | 46.0 | 9890 | 6.6029 | 0.5556 |
| 0.0001 | 47.0 | 10105 | 6.7925 | 0.5556 |
| 0.0001 | 48.0 | 10320 | 6.7039 | 0.5333 |
| 0.0 | 49.0 | 10535 | 7.0556 | 0.5333 |
| 0.0001 | 50.0 | 10750 | 7.0637 | 0.5333 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Chhaya/results
|
Chhaya
| 2023-12-24T12:34:29Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2023-12-24T12:32:50Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Llama-2-7b-chat-hf
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
chanhua/autotrain-82nel-cfd2f
|
chanhua
| 2023-12-24T12:30:01Z | 25 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"dataset:chanhua/autotrain-data-autotrain-82nel-cfd2f",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T12:29:55Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- chanhua/autotrain-data-autotrain-82nel-cfd2f
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.06666666666666668
f1_micro: 0.20000000000000004
f1_weighted: 0.06666666666666668
precision_macro: 0.04
precision_micro: 0.2
precision_weighted: 0.04
recall_macro: 0.2
recall_micro: 0.2
recall_weighted: 0.2
accuracy: 0.2
|
helpmefindaname/flair-eml-biobert-bc5cdr-chemical
|
helpmefindaname
| 2023-12-24T12:20:57Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | null | 2023-12-23T22:50:52Z |
---
tags:
- flair
- entity-mention-linker
---
## biobert-bc5cdr-chemical
Biomedical Entity Mention Linking for chemical
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
sentence = Sentence("Behavioral abnormalities in the Fmr1 KO2 Mouse Model of Fragile X Syndrome")
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("helpmefindaname/flair-eml-biobert-bc5cdr-chemical")
dictionary = linker.dictionary
# find the candidates for the mentions
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(linker.entity_label_type):
    print(f"Span: {span.text}")
    for candidate_label in span.get_labels(linker.label_type):
        candidate = dictionary[candidate_label.value]
        print(f"Candidate: {candidate.concept_name}")
```
As an alternative to downloading the precomputed model (which requires a lot of storage), you can also build the model
and compute the embeddings for the dataset yourself:
```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-biobert-bc5cdr-chemical", "chemical", dictionary_name_or_path="ctd-chemicals", hybrid_search=False, entity_type="chemical-eml")
```
This will reduce the download requirements, at the cost of computation.
This EntityMentionLinker uses [dmis-lab/biosyn-biobert-bc5cdr-chemical](https://huggingface.co/dmis-lab/biosyn-biobert-bc5cdr-chemical) as embeddings for linking mentions to candidates.
|
helpmefindaname/flair-eml-biobert-bc2gn-gene
|
helpmefindaname
| 2023-12-24T12:20:40Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | null | 2023-12-24T06:10:56Z |
---
tags:
- flair
- entity-mention-linker
---
## biobert-bc2gn-gene
Biomedical Entity Mention Linking for gene
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
sentence = Sentence("Behavioral abnormalities in the Fmr1 KO2 Mouse Model of Fragile X Syndrome")
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("helpmefindaname/flair-eml-biobert-bc2gn-gene")
dictionary = linker.dictionary
# find the candidates for the mentions
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(linker.entity_label_type):
    print(f"Span: {span.text}")
    for candidate_label in span.get_labels(linker.label_type):
        candidate = dictionary[candidate_label.value]
        print(f"Candidate: {candidate.concept_name}")
```
As an alternative to downloading the precomputed model (which requires a lot of storage), you can also build the model
and compute the embeddings for the dataset yourself:
```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-biobert-bc2gn", "gene", dictionary_name_or_path="ncbi-gene", hybrid_search=False, entity_type="gene-eml")
```
This will reduce the download requirements, at the cost of computation.
This EntityMentionLinker uses [dmis-lab/biosyn-biobert-bc2gn](https://huggingface.co/dmis-lab/biosyn-biobert-bc2gn) as embeddings for linking mentions to candidates.
|
jgodding/ppo-LunarLander-v2
|
jgodding
| 2023-12-24T12:10:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T12:10:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.91 +/- 17.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename passed to `load_from_hub` is an assumption, not confirmed by the repository):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename below is assumed, not verified.
checkpoint = load_from_hub("jgodding/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MadFritz/LunarLander-v2
|
MadFritz
| 2023-12-24T12:10:15Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-24T12:08:53Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -159.28 +/- 133.83
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo-LunarLander',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'MadFritz/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
ThuyNT03/KLTN_COQE_viT5_total_ASPOL_v4
|
ThuyNT03
| 2023-12-24T12:09:33Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-24T11:27:28Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_ASPOL_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_ASPOL_v4
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
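No usage example is given in the card; a minimal sketch (assuming the checkpoint works with the standard text2text-generation pipeline) would be:
```python
# Hedged sketch: query the fine-tuned ViT5 model.
from transformers import pipeline

generator = pipeline("text2text-generation", model="ThuyNT03/KLTN_COQE_viT5_total_ASPOL_v4")
# The input string is a placeholder; the card does not document the expected input format.
print(generator("<Vietnamese review text here>", max_new_tokens=64))
```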
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
ThuyNT03/KLTN_COQE_viT5_total_APSOL_v4
|
ThuyNT03
| 2023-12-24T12:06:51Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-24T10:43:27Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_APSOL_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_APSOL_v4
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24
|
GordonMcGregor
| 2023-12-24T11:59:14Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-24T07:13:19Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a photo of TOK man in a fedora'
output:
url:
"image_0.png"
- text: 'a photo of TOK man in a fedora'
output:
url:
"image_1.png"
- text: 'a photo of TOK man in a fedora'
output:
url:
"image_2.png"
- text: 'a photo of TOK man in a fedora'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK man
license: openrail++
---
# SDXL LoRA DreamBooth - GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24
<Gallery />
## Model description
These are GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24/tree/main) them in the Files & versions tab.
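The card does not show inference code; a hedged sketch with `diffusers` (parameters such as the step count are illustrative) would be:
```python
# Hedged sketch: load SDXL base and apply these LoRA weights with diffusers.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("GordonMcGregor/stable-diffusion-xl-base-1.0-lora-TOK-Gordon_dec_24")

image = pipe("a photo of TOK man in a fedora", num_inference_steps=25).images[0]
image.save("tok_fedora.png")
```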
|
lemoneresearch/tsdae-lemone-mbert-tax
|
lemoneresearch
| 2023-12-24T11:49:09Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"legal",
"french-law",
"droit français",
"tax",
"droit fiscal",
"fiscalité",
"fr",
"dataset:louisbrulenaudet/lpf",
"dataset:louisbrulenaudet/cgi",
"dataset:louisbrulenaudet/code-douanes",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-24T11:31:40Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- legal
- french-law
- droit français
- tax
- droit fiscal
- fiscalité
license: apache-2.0
pretty_name: Domain-adapted mBERT for French Tax Practice
datasets:
- louisbrulenaudet/lpf
- louisbrulenaudet/cgi
- louisbrulenaudet/code-douanes
language:
- fr
library_name: sentence-transformers
---
# Domain-adapted mBERT for French Tax Practice
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
This transformers model was pretrained on the 102 languages with the largest Wikipedias using a masked language modeling (MLM) objective, then fitted with a Transformer-based Sequential Denoising Auto-Encoder (TSDAE) for unsupervised sentence-embedding learning, with a single objective: French tax domain adaptation.
This way, the model learns an inner representation of French legal language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the model as inputs.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer("louisbrulenaudet/tsdae-lemone-mbert-tax")
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
    return model_output[0][:, 0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("louisbrulenaudet/tsdae-lemone-mbert-tax")
model = AutoModel.from_pretrained("louisbrulenaudet/tsdae-lemone-mbert-tax")
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input["attention_mask"])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5507 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "constantlr",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
If you use this code in your research, please use the following BibTeX entry.
```BibTeX
@misc{louisbrulenaudet2023,
  author = {Louis Brulé Naudet},
  title = {Domain-adapted mBERT for French Tax Practice},
  year = {2023},
  howpublished = {\url{https://huggingface.co/louisbrulenaudet/tsdae-lemone-mbert-tax}},
}
```
## Feedback
If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).
|
csujeong/Falcon-7b-Finetuned-Financial-Stock
|
csujeong
| 2023-12-24T11:48:40Z | 7 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-12-24T11:40:14Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: tiiuae/falcon-7b
model-index:
- name: Falcon-7b-Finetuned-Financial-Stock
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Falcon-7b-Finetuned-Financial-Stock
This model is a fine-tuned version of [tiiuae/falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 60
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
YagiASAFAS/distilbert-base-uncased-finetuned-emotion
|
YagiASAFAS
| 2023-12-24T11:45:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T11:02:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1710
- eval_accuracy: 0.9295
- eval_f1: 0.9302
- eval_runtime: 11.2289
- eval_samples_per_second: 178.112
- eval_steps_per_second: 2.85
- epoch: 1.0
- step: 250
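For quick use, a minimal sketch (assuming the standard text-classification pipeline; the example sentence is illustrative):
```python
# Hedged sketch: classify a sentence with the fine-tuned emotion model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="YagiASAFAS/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```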
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.0.dev20231211
- Datasets 2.15.0
- Tokenizers 0.11.0
|
yuanhuaisen/autotrain-r6fhf-a4d7f
|
yuanhuaisen
| 2023-12-24T11:18:54Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:yuanhuaisen/autotrain-data-autotrain-r6fhf-a4d7f",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T11:18:20Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- yuanhuaisen/autotrain-data-autotrain-r6fhf-a4d7f
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.4276469945907593
f1_macro: 0.8530901722391085
f1_micro: 0.875
f1_weighted: 0.878488532743852
precision_macro: 0.8621098104793757
precision_micro: 0.875
precision_weighted: 0.8893636933718457
recall_macro: 0.8544277360066833
recall_micro: 0.875
recall_weighted: 0.875
accuracy: 0.875
|
ThuyNT03/KLTN_COQE_viT5_total_PASOL_v4
|
ThuyNT03
| 2023-12-24T11:18:23Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-24T09:57:52Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_PASOL_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_PASOL_v4
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
clewiston/autotrain-vlxo9-2s7eh
|
clewiston
| 2023-12-24T11:16:20Z | 27 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"dataset:clewiston/autotrain-data-autotrain-vlxo9-2s7eh",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T11:15:32Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- clewiston/autotrain-data-autotrain-vlxo9-2s7eh
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 1.347914457321167
f1_macro: 0.196969696969697
f1_micro: 0.65
f1_weighted: 0.5121212121212122
precision_macro: 0.1625
precision_micro: 0.65
precision_weighted: 0.42250000000000004
recall_macro: 0.25
recall_micro: 0.65
recall_weighted: 0.65
accuracy: 0.65
|
iForgotMyName8008/ppo-Huggy
|
iForgotMyName8008
| 2023-12-24T11:01:02Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-12-24T11:00:56Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: iForgotMyName8008/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
hkivancoral/hushem_40x_deit_base_sgd_0001_fold4
|
hkivancoral
| 2023-12-24T11:00:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T10:19:12Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_0001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.42857142857142855
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_0001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2151
- Accuracy: 0.4286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3918 | 1.0 | 219 | 1.4045 | 0.3095 |
| 1.3704 | 2.0 | 438 | 1.3956 | 0.3095 |
| 1.3491 | 3.0 | 657 | 1.3880 | 0.3333 |
| 1.3369 | 4.0 | 876 | 1.3811 | 0.3333 |
| 1.3406 | 5.0 | 1095 | 1.3747 | 0.3333 |
| 1.3171 | 6.0 | 1314 | 1.3686 | 0.3333 |
| 1.2982 | 7.0 | 1533 | 1.3628 | 0.3571 |
| 1.2896 | 8.0 | 1752 | 1.3571 | 0.3571 |
| 1.2549 | 9.0 | 1971 | 1.3513 | 0.3810 |
| 1.2384 | 10.0 | 2190 | 1.3457 | 0.4048 |
| 1.2507 | 11.0 | 2409 | 1.3401 | 0.4286 |
| 1.2362 | 12.0 | 2628 | 1.3346 | 0.4286 |
| 1.1966 | 13.0 | 2847 | 1.3293 | 0.4286 |
| 1.2279 | 14.0 | 3066 | 1.3240 | 0.4286 |
| 1.2136 | 15.0 | 3285 | 1.3188 | 0.4286 |
| 1.1856 | 16.0 | 3504 | 1.3138 | 0.4286 |
| 1.1941 | 17.0 | 3723 | 1.3088 | 0.4286 |
| 1.1805 | 18.0 | 3942 | 1.3039 | 0.4286 |
| 1.1554 | 19.0 | 4161 | 1.2991 | 0.4048 |
| 1.1709 | 20.0 | 4380 | 1.2943 | 0.4048 |
| 1.1523 | 21.0 | 4599 | 1.2895 | 0.4048 |
| 1.138 | 22.0 | 4818 | 1.2848 | 0.4048 |
| 1.0984 | 23.0 | 5037 | 1.2803 | 0.4048 |
| 1.1405 | 24.0 | 5256 | 1.2759 | 0.4048 |
| 1.1028 | 25.0 | 5475 | 1.2716 | 0.4286 |
| 1.1236 | 26.0 | 5694 | 1.2674 | 0.4286 |
| 1.0819 | 27.0 | 5913 | 1.2634 | 0.4286 |
| 1.1245 | 28.0 | 6132 | 1.2595 | 0.4286 |
| 1.0929 | 29.0 | 6351 | 1.2557 | 0.4286 |
| 1.0861 | 30.0 | 6570 | 1.2521 | 0.4048 |
| 1.082 | 31.0 | 6789 | 1.2486 | 0.4048 |
| 1.0826 | 32.0 | 7008 | 1.2452 | 0.4048 |
| 1.0889 | 33.0 | 7227 | 1.2420 | 0.4048 |
| 1.052 | 34.0 | 7446 | 1.2390 | 0.4286 |
| 1.056 | 35.0 | 7665 | 1.2361 | 0.4286 |
| 1.0391 | 36.0 | 7884 | 1.2333 | 0.4286 |
| 1.0236 | 37.0 | 8103 | 1.2307 | 0.4286 |
| 1.0474 | 38.0 | 8322 | 1.2283 | 0.4286 |
| 1.0069 | 39.0 | 8541 | 1.2261 | 0.4286 |
| 1.0443 | 40.0 | 8760 | 1.2242 | 0.4286 |
| 1.0711 | 41.0 | 8979 | 1.2223 | 0.4048 |
| 1.053 | 42.0 | 9198 | 1.2207 | 0.4286 |
| 1.0356 | 43.0 | 9417 | 1.2193 | 0.4286 |
| 1.0491 | 44.0 | 9636 | 1.2181 | 0.4286 |
| 0.9928 | 45.0 | 9855 | 1.2171 | 0.4286 |
| 1.0402 | 46.0 | 10074 | 1.2163 | 0.4286 |
| 1.0792 | 47.0 | 10293 | 1.2157 | 0.4286 |
| 1.0146 | 48.0 | 10512 | 1.2153 | 0.4286 |
| 1.0325 | 49.0 | 10731 | 1.2152 | 0.4286 |
| 1.0249 | 50.0 | 10950 | 1.2151 | 0.4286 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_base_sgd_00001_fold4
|
hkivancoral
| 2023-12-24T10:59:01Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T10:16:24Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_00001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.30952380952380953
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3940
- Accuracy: 0.3095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4075 | 1.0 | 219 | 1.4139 | 0.2857 |
| 1.4006 | 2.0 | 438 | 1.4128 | 0.2857 |
| 1.397 | 3.0 | 657 | 1.4119 | 0.2857 |
| 1.4009 | 4.0 | 876 | 1.4109 | 0.2857 |
| 1.4192 | 5.0 | 1095 | 1.4100 | 0.2857 |
| 1.4068 | 6.0 | 1314 | 1.4091 | 0.2857 |
| 1.4024 | 7.0 | 1533 | 1.4083 | 0.2857 |
| 1.3965 | 8.0 | 1752 | 1.4075 | 0.2857 |
| 1.3783 | 9.0 | 1971 | 1.4067 | 0.3095 |
| 1.3738 | 10.0 | 2190 | 1.4060 | 0.3095 |
| 1.3936 | 11.0 | 2409 | 1.4053 | 0.3095 |
| 1.3746 | 12.0 | 2628 | 1.4046 | 0.3095 |
| 1.3536 | 13.0 | 2847 | 1.4040 | 0.3095 |
| 1.4005 | 14.0 | 3066 | 1.4033 | 0.3095 |
| 1.3798 | 15.0 | 3285 | 1.4027 | 0.3095 |
| 1.3748 | 16.0 | 3504 | 1.4022 | 0.3095 |
| 1.3581 | 17.0 | 3723 | 1.4016 | 0.3095 |
| 1.3695 | 18.0 | 3942 | 1.4011 | 0.3095 |
| 1.366 | 19.0 | 4161 | 1.4006 | 0.3095 |
| 1.3735 | 20.0 | 4380 | 1.4001 | 0.3095 |
| 1.3732 | 21.0 | 4599 | 1.3997 | 0.3095 |
| 1.3632 | 22.0 | 4818 | 1.3992 | 0.3095 |
| 1.3525 | 23.0 | 5037 | 1.3988 | 0.3095 |
| 1.3845 | 24.0 | 5256 | 1.3984 | 0.3095 |
| 1.363 | 25.0 | 5475 | 1.3980 | 0.3095 |
| 1.3693 | 26.0 | 5694 | 1.3977 | 0.3095 |
| 1.3693 | 27.0 | 5913 | 1.3973 | 0.3095 |
| 1.3914 | 28.0 | 6132 | 1.3970 | 0.3095 |
| 1.3857 | 29.0 | 6351 | 1.3967 | 0.3095 |
| 1.3681 | 30.0 | 6570 | 1.3964 | 0.3095 |
| 1.3619 | 31.0 | 6789 | 1.3962 | 0.3095 |
| 1.3666 | 32.0 | 7008 | 1.3959 | 0.3095 |
| 1.3733 | 33.0 | 7227 | 1.3957 | 0.3095 |
| 1.3572 | 34.0 | 7446 | 1.3955 | 0.3095 |
| 1.3715 | 35.0 | 7665 | 1.3953 | 0.3095 |
| 1.3581 | 36.0 | 7884 | 1.3951 | 0.3095 |
| 1.3453 | 37.0 | 8103 | 1.3949 | 0.3095 |
| 1.3666 | 38.0 | 8322 | 1.3948 | 0.3095 |
| 1.3416 | 39.0 | 8541 | 1.3946 | 0.3095 |
| 1.3435 | 40.0 | 8760 | 1.3945 | 0.3095 |
| 1.3731 | 41.0 | 8979 | 1.3944 | 0.3095 |
| 1.3652 | 42.0 | 9198 | 1.3943 | 0.3095 |
| 1.3499 | 43.0 | 9417 | 1.3942 | 0.3095 |
| 1.3629 | 44.0 | 9636 | 1.3941 | 0.3095 |
| 1.3332 | 45.0 | 9855 | 1.3941 | 0.3095 |
| 1.3535 | 46.0 | 10074 | 1.3940 | 0.3095 |
| 1.3876 | 47.0 | 10293 | 1.3940 | 0.3095 |
| 1.363 | 48.0 | 10512 | 1.3940 | 0.3095 |
| 1.3575 | 49.0 | 10731 | 1.3940 | 0.3095 |
| 1.3466 | 50.0 | 10950 | 1.3940 | 0.3095 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
ntc-ai/SDXL-LoRA-slider.deep-soulful-olive-eyes
|
ntc-ai
| 2023-12-24T10:45:03Z | 123 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-24T10:44:59Z |
---
language:
- en
thumbnail: "images/evaluate/deep soulful olive eyes...regular eye color/deep soulful olive eyes_17_3.0.png"
widget:
- text: deep soulful olive eyes
output:
url: images/deep soulful olive eyes_17_3.0.png
- text: deep soulful olive eyes
output:
url: images/deep soulful olive eyes_19_3.0.png
- text: deep soulful olive eyes
output:
url: images/deep soulful olive eyes_20_3.0.png
- text: deep soulful olive eyes
output:
url: images/deep soulful olive eyes_21_3.0.png
- text: deep soulful olive eyes
output:
url: images/deep soulful olive eyes_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "deep soulful olive eyes"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - deep soulful olive eyes (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/deep soulful olive eyes_17_-3.0.png" width=256 height=256 /> | <img src="images/deep soulful olive eyes_17_0.0.png" width=256 height=256 /> | <img src="images/deep soulful olive eyes_17_3.0.png" width=256 height=256 /> |
| <img src="images/deep soulful olive eyes_19_-3.0.png" width=256 height=256 /> | <img src="images/deep soulful olive eyes_19_0.0.png" width=256 height=256 /> | <img src="images/deep soulful olive eyes_19_3.0.png" width=256 height=256 /> |
| <img src="images/deep soulful olive eyes_20_-3.0.png" width=256 height=256 /> | <img src="images/deep soulful olive eyes_20_0.0.png" width=256 height=256 /> | <img src="images/deep soulful olive eyes_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
deep soulful olive eyes
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.deep-soulful-olive-eyes', weight_name='deep soulful olive eyes.safetensors', adapter_name="deep soulful olive eyes")
# Activate the LoRA
pipe.set_adapters(["deep soulful olive eyes"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, deep soulful olive eyes"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model, please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 590 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
FirstLast/RealisticVision-LoRA-lidrs-4
|
FirstLast
| 2023-12-24T10:26:47Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:SG161222/Realistic_Vision_V5.1_noVAE",
"base_model:adapter:SG161222/Realistic_Vision_V5.1_noVAE",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-12-24T08:24:43Z |
---
license: creativeml-openrail-m
base_model: SG161222/Realistic_Vision_V5.1_noVAE
instance_prompt: a woman in a lidrs dress
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - FirstLast/RealisticVision-LoRA-lidrs-4
These are LoRA adaptation weights for SG161222/Realistic_Vision_V5.1_noVAE. The weights were trained on the instance prompt "a woman in a lidrs dress" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
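A minimal loading sketch with `diffusers` (assuming a standard Stable Diffusion text-to-image pipeline; as the base checkpoint's name suggests, it ships without a VAE, so you may want to pair it with one):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")
# Load the LoRA adaptation weights from this repository
pipe.load_lora_weights("FirstLast/RealisticVision-LoRA-lidrs-4")

image = pipe("a woman in a lidrs dress", num_inference_steps=30).images[0]
image.save("lidrs_sample.png")
```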
|
hkivancoral/hushem_40x_deit_base_sgd_0001_fold3
|
hkivancoral
| 2023-12-24T10:19:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T09:36:09Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_0001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.46511627906976744
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_0001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2710
- Accuracy: 0.4651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
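These settings correspond roughly to the following `TrainingArguments` (a sketch; the output directory is a placeholder and only the arguments listed above are shown):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_40x_deit_base_sgd_0001_fold3",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```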
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3787 | 1.0 | 217 | 1.4567 | 0.2326 |
| 1.3411 | 2.0 | 434 | 1.4476 | 0.2326 |
| 1.3346 | 3.0 | 651 | 1.4398 | 0.2326 |
| 1.3522 | 4.0 | 868 | 1.4325 | 0.2558 |
| 1.295 | 5.0 | 1085 | 1.4257 | 0.2558 |
| 1.3027 | 6.0 | 1302 | 1.4192 | 0.2791 |
| 1.2908 | 7.0 | 1519 | 1.4129 | 0.3023 |
| 1.2684 | 8.0 | 1736 | 1.4068 | 0.3023 |
| 1.2597 | 9.0 | 1953 | 1.4007 | 0.3023 |
| 1.2504 | 10.0 | 2170 | 1.3948 | 0.3023 |
| 1.2181 | 11.0 | 2387 | 1.3891 | 0.3023 |
| 1.2286 | 12.0 | 2604 | 1.3834 | 0.3023 |
| 1.229 | 13.0 | 2821 | 1.3779 | 0.3023 |
| 1.2118 | 14.0 | 3038 | 1.3725 | 0.3256 |
| 1.1939 | 15.0 | 3255 | 1.3673 | 0.3256 |
| 1.2054 | 16.0 | 3472 | 1.3622 | 0.3488 |
| 1.1836 | 17.0 | 3689 | 1.3572 | 0.3721 |
| 1.1754 | 18.0 | 3906 | 1.3524 | 0.3721 |
| 1.1872 | 19.0 | 4123 | 1.3477 | 0.3721 |
| 1.1652 | 20.0 | 4340 | 1.3431 | 0.3721 |
| 1.1396 | 21.0 | 4557 | 1.3387 | 0.3721 |
| 1.1373 | 22.0 | 4774 | 1.3343 | 0.3953 |
| 1.1381 | 23.0 | 4991 | 1.3300 | 0.3953 |
| 1.101 | 24.0 | 5208 | 1.3259 | 0.3953 |
| 1.1305 | 25.0 | 5425 | 1.3219 | 0.4186 |
| 1.1458 | 26.0 | 5642 | 1.3181 | 0.4186 |
| 1.0969 | 27.0 | 5859 | 1.3143 | 0.4186 |
| 1.092 | 28.0 | 6076 | 1.3106 | 0.4186 |
| 1.0422 | 29.0 | 6293 | 1.3071 | 0.4186 |
| 1.07 | 30.0 | 6510 | 1.3037 | 0.4419 |
| 1.097 | 31.0 | 6727 | 1.3005 | 0.4419 |
| 1.1048 | 32.0 | 6944 | 1.2974 | 0.4419 |
| 1.0657 | 33.0 | 7161 | 1.2945 | 0.4419 |
| 1.0841 | 34.0 | 7378 | 1.2918 | 0.4419 |
| 1.0697 | 35.0 | 7595 | 1.2891 | 0.4419 |
| 1.0586 | 36.0 | 7812 | 1.2867 | 0.4419 |
| 1.0346 | 37.0 | 8029 | 1.2845 | 0.4419 |
| 1.0364 | 38.0 | 8246 | 1.2824 | 0.4651 |
| 1.055 | 39.0 | 8463 | 1.2804 | 0.4651 |
| 1.0391 | 40.0 | 8680 | 1.2787 | 0.4651 |
| 1.0408 | 41.0 | 8897 | 1.2771 | 0.4651 |
| 1.0911 | 42.0 | 9114 | 1.2757 | 0.4651 |
| 1.042 | 43.0 | 9331 | 1.2745 | 0.4651 |
| 1.0562 | 44.0 | 9548 | 1.2735 | 0.4651 |
| 1.0444 | 45.0 | 9765 | 1.2727 | 0.4651 |
| 1.0551 | 46.0 | 9982 | 1.2720 | 0.4651 |
| 1.0314 | 47.0 | 10199 | 1.2715 | 0.4651 |
| 1.067 | 48.0 | 10416 | 1.2712 | 0.4651 |
| 1.0573 | 49.0 | 10633 | 1.2710 | 0.4651 |
| 1.0022 | 50.0 | 10850 | 1.2710 | 0.4651 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
ThuyNT03/KLTN_COQE_viT5_total_APSOL_v2
|
ThuyNT03
| 2023-12-24T10:03:50Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-23T00:56:54Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_APSOL_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_APSOL_v2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
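No evaluation results are reported. A minimal inference sketch with the `text2text-generation` pipeline (assuming the checkpoint is hosted under this repository id; the Vietnamese input is only a placeholder, since the expected prompt format is not documented here):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="ThuyNT03/KLTN_COQE_viT5_total_APSOL_v2")
# Placeholder input; replace with text in the format the model was trained on
print(generator("Điện thoại này chụp ảnh đẹp hơn máy kia.", max_length=128))
```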
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
ThuyNT03/KLTN_COQE_viT5_total_POASL_v4
|
ThuyNT03
| 2023-12-24T09:57:44Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-24T09:16:51Z |
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_POASL_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_POASL_v4
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
Pianokill/fashion-sdxl-lora-training-with-256x256
|
Pianokill
| 2023-12-24T09:55:50Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"en",
"vi",
"dataset:Pianokill/SportFashionImages_256x256",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2023-12-24T01:33:30Z |
---
license: apache-2.0
datasets:
- Pianokill/SportFashionImages_256x256
language:
- en
- vi
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-to-image
---
|
Anwaarma/Merged-MM-praj
|
Anwaarma
| 2023-12-24T09:49:59Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:prajjwal1/bert-tiny",
"base_model:finetune:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T09:43:11Z |
---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Merged-MM-praj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Merged-MM-praj
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5525
- Accuracy: 0.7777
- F1: 0.8749
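A minimal inference sketch with the `text-classification` pipeline (assuming the checkpoint is hosted under this repository id; the label meanings are not documented here, and the input sentence is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Anwaarma/Merged-MM-praj")
print(classifier("Example input sentence to classify."))
```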
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.0 | 50 | 0.6929 | 0.526 | 0.3813 |
| No log | 0.0 | 100 | 0.6938 | 0.48 | 0.3125 |
| No log | 0.01 | 150 | 0.6971 | 0.479 | 0.3103 |
| No log | 0.01 | 200 | 0.6948 | 0.479 | 0.3103 |
| No log | 0.01 | 250 | 0.6938 | 0.479 | 0.3103 |
| No log | 0.01 | 300 | 0.6939 | 0.479 | 0.3103 |
| No log | 0.01 | 350 | 0.6927 | 0.521 | 0.3587 |
| No log | 0.02 | 400 | 0.6931 | 0.501 | 0.4988 |
| No log | 0.02 | 450 | 0.6944 | 0.479 | 0.3103 |
| 0.6942 | 0.02 | 500 | 0.6954 | 0.479 | 0.3103 |
| 0.6942 | 0.02 | 550 | 0.6960 | 0.479 | 0.3103 |
| 0.6942 | 0.02 | 600 | 0.6934 | 0.486 | 0.3322 |
| 0.6942 | 0.02 | 650 | 0.6970 | 0.479 | 0.3103 |
| 0.6942 | 0.03 | 700 | 0.6929 | 0.535 | 0.4767 |
| 0.6942 | 0.03 | 750 | 0.6931 | 0.499 | 0.4609 |
| 0.6942 | 0.03 | 800 | 0.6952 | 0.479 | 0.3103 |
| 0.6942 | 0.03 | 850 | 0.6933 | 0.48 | 0.3160 |
| 0.6942 | 0.03 | 900 | 0.6979 | 0.479 | 0.3103 |
| 0.6942 | 0.04 | 950 | 0.6940 | 0.479 | 0.3103 |
| 0.6938 | 0.04 | 1000 | 0.6915 | 0.521 | 0.3569 |
| 0.6938 | 0.04 | 1050 | 0.6942 | 0.479 | 0.3103 |
| 0.6938 | 0.04 | 1100 | 0.6884 | 0.519 | 0.3630 |
| 0.6938 | 0.04 | 1150 | 0.6849 | 0.596 | 0.5817 |
| 0.6938 | 0.05 | 1200 | 0.6849 | 0.547 | 0.5131 |
| 0.6938 | 0.05 | 1250 | 0.6771 | 0.568 | 0.5502 |
| 0.6938 | 0.05 | 1300 | 0.6792 | 0.572 | 0.5558 |
| 0.6938 | 0.05 | 1350 | 0.6889 | 0.55 | 0.5161 |
| 0.6938 | 0.05 | 1400 | 0.6792 | 0.59 | 0.5828 |
| 0.6938 | 0.06 | 1450 | 0.6729 | 0.602 | 0.5987 |
| 0.6781 | 0.06 | 1500 | 0.6702 | 0.592 | 0.5822 |
| 0.6781 | 0.06 | 1550 | 0.6711 | 0.578 | 0.5633 |
| 0.6781 | 0.06 | 1600 | 0.6642 | 0.607 | 0.6024 |
| 0.6781 | 0.06 | 1650 | 0.6624 | 0.592 | 0.5819 |
| 0.6781 | 0.07 | 1700 | 0.6585 | 0.595 | 0.5883 |
| 0.6781 | 0.07 | 1750 | 0.6543 | 0.584 | 0.5740 |
| 0.6781 | 0.07 | 1800 | 0.6452 | 0.6 | 0.5926 |
| 0.6781 | 0.07 | 1850 | 0.6355 | 0.615 | 0.6106 |
| 0.6781 | 0.07 | 1900 | 0.6280 | 0.615 | 0.6090 |
| 0.6781 | 0.07 | 1950 | 0.6209 | 0.621 | 0.6139 |
| 0.6465 | 0.08 | 2000 | 0.6178 | 0.632 | 0.6247 |
| 0.6465 | 0.08 | 2050 | 0.6133 | 0.641 | 0.6303 |
| 0.6465 | 0.08 | 2100 | 0.6132 | 0.629 | 0.6218 |
| 0.6465 | 0.08 | 2150 | 0.6155 | 0.63 | 0.6289 |
| 0.6465 | 0.08 | 2200 | 0.5984 | 0.635 | 0.6322 |
| 0.6465 | 0.09 | 2250 | 0.6065 | 0.633 | 0.6102 |
| 0.6465 | 0.09 | 2300 | 0.5968 | 0.629 | 0.6063 |
| 0.6465 | 0.09 | 2350 | 0.5871 | 0.649 | 0.6411 |
| 0.6465 | 0.09 | 2400 | 0.5824 | 0.64 | 0.6218 |
| 0.6465 | 0.09 | 2450 | 0.5812 | 0.643 | 0.6390 |
| 0.6042 | 0.1 | 2500 | 0.5790 | 0.644 | 0.6355 |
| 0.6042 | 0.1 | 2550 | 0.5744 | 0.654 | 0.6507 |
| 0.6042 | 0.1 | 2600 | 0.5679 | 0.641 | 0.6292 |
| 0.6042 | 0.1 | 2650 | 0.5707 | 0.644 | 0.6311 |
| 0.6042 | 0.1 | 2700 | 0.5707 | 0.652 | 0.6439 |
| 0.6042 | 0.11 | 2750 | 0.5680 | 0.661 | 0.6569 |
| 0.6042 | 0.11 | 2800 | 0.5592 | 0.67 | 0.6684 |
| 0.6042 | 0.11 | 2850 | 0.5557 | 0.678 | 0.6758 |
| 0.6042 | 0.11 | 2900 | 0.5579 | 0.671 | 0.6690 |
| 0.6042 | 0.11 | 2950 | 0.5490 | 0.692 | 0.6909 |
| 0.5834 | 0.11 | 3000 | 0.5474 | 0.688 | 0.6858 |
| 0.5834 | 0.12 | 3050 | 0.5447 | 0.696 | 0.6902 |
| 0.5834 | 0.12 | 3100 | 0.5456 | 0.699 | 0.6985 |
| 0.5834 | 0.12 | 3150 | 0.5592 | 0.675 | 0.6628 |
| 0.5834 | 0.12 | 3200 | 0.5442 | 0.69 | 0.6856 |
| 0.5834 | 0.12 | 3250 | 0.5424 | 0.698 | 0.6974 |
| 0.5834 | 0.13 | 3300 | 0.5464 | 0.691 | 0.6907 |
| 0.5834 | 0.13 | 3350 | 0.5433 | 0.693 | 0.6922 |
| 0.5834 | 0.13 | 3400 | 0.5400 | 0.746 | 0.7461 |
| 0.5834 | 0.13 | 3450 | 0.5406 | 0.712 | 0.7091 |
| 0.5551 | 0.13 | 3500 | 0.5367 | 0.738 | 0.7376 |
| 0.5551 | 0.14 | 3550 | 0.5354 | 0.713 | 0.7091 |
| 0.5551 | 0.14 | 3600 | 0.5377 | 0.74 | 0.7400 |
| 0.5551 | 0.14 | 3650 | 0.5342 | 0.751 | 0.7506 |
| 0.5551 | 0.14 | 3700 | 0.5386 | 0.701 | 0.6992 |
| 0.5551 | 0.14 | 3750 | 0.5395 | 0.737 | 0.7368 |
| 0.5551 | 0.15 | 3800 | 0.5333 | 0.733 | 0.7330 |
| 0.5551 | 0.15 | 3850 | 0.5245 | 0.737 | 0.7371 |
| 0.5551 | 0.15 | 3900 | 0.5236 | 0.745 | 0.7451 |
| 0.5551 | 0.15 | 3950 | 0.5149 | 0.741 | 0.7400 |
| 0.5508 | 0.15 | 4000 | 0.5208 | 0.743 | 0.7422 |
| 0.5508 | 0.16 | 4050 | 0.5109 | 0.744 | 0.7440 |
| 0.5508 | 0.16 | 4100 | 0.5179 | 0.742 | 0.7398 |
| 0.5508 | 0.16 | 4150 | 0.5133 | 0.75 | 0.7499 |
| 0.5508 | 0.16 | 4200 | 0.5110 | 0.744 | 0.7416 |
| 0.5508 | 0.16 | 4250 | 0.5133 | 0.749 | 0.7476 |
| 0.5508 | 0.16 | 4300 | 0.5075 | 0.743 | 0.7410 |
| 0.5508 | 0.17 | 4350 | 0.5108 | 0.755 | 0.7544 |
| 0.5508 | 0.17 | 4400 | 0.5051 | 0.747 | 0.7465 |
| 0.5508 | 0.17 | 4450 | 0.5064 | 0.746 | 0.7455 |
| 0.5362 | 0.17 | 4500 | 0.5030 | 0.744 | 0.7441 |
| 0.5362 | 0.17 | 4550 | 0.5043 | 0.748 | 0.7476 |
| 0.5362 | 0.18 | 4600 | 0.5010 | 0.753 | 0.7531 |
| 0.5362 | 0.18 | 4650 | 0.4988 | 0.762 | 0.7616 |
| 0.5362 | 0.18 | 4700 | 0.4999 | 0.755 | 0.7548 |
| 0.5362 | 0.18 | 4750 | 0.5159 | 0.754 | 0.7529 |
| 0.5362 | 0.18 | 4800 | 0.4924 | 0.764 | 0.7639 |
| 0.5362 | 0.19 | 4850 | 0.4935 | 0.755 | 0.7549 |
| 0.5362 | 0.19 | 4900 | 0.4874 | 0.76 | 0.7601 |
| 0.5362 | 0.19 | 4950 | 0.4859 | 0.759 | 0.7591 |
| 0.5226 | 0.19 | 5000 | 0.4901 | 0.761 | 0.7610 |
| 0.5226 | 0.19 | 5050 | 0.4740 | 0.779 | 0.7790 |
| 0.5226 | 0.2 | 5100 | 0.4799 | 0.783 | 0.7831 |
| 0.5226 | 0.2 | 5150 | 0.4833 | 0.771 | 0.7698 |
| 0.5226 | 0.2 | 5200 | 0.4879 | 0.759 | 0.7561 |
| 0.5226 | 0.2 | 5250 | 0.4812 | 0.772 | 0.7719 |
| 0.5226 | 0.2 | 5300 | 0.4825 | 0.772 | 0.7715 |
| 0.5226 | 0.2 | 5350 | 0.4791 | 0.775 | 0.7744 |
| 0.5226 | 0.21 | 5400 | 0.4749 | 0.773 | 0.7729 |
| 0.5226 | 0.21 | 5450 | 0.4691 | 0.782 | 0.7811 |
| 0.5055 | 0.21 | 5500 | 0.4752 | 0.78 | 0.7791 |
| 0.5055 | 0.21 | 5550 | 0.4621 | 0.766 | 0.7645 |
| 0.5055 | 0.21 | 5600 | 0.4628 | 0.779 | 0.7790 |
| 0.5055 | 0.22 | 5650 | 0.4543 | 0.776 | 0.7760 |
| 0.5055 | 0.22 | 5700 | 0.4548 | 0.786 | 0.7861 |
| 0.5055 | 0.22 | 5750 | 0.4578 | 0.777 | 0.7763 |
| 0.5055 | 0.22 | 5800 | 0.4684 | 0.778 | 0.7780 |
| 0.5055 | 0.22 | 5850 | 0.4626 | 0.775 | 0.7751 |
| 0.5055 | 0.23 | 5900 | 0.4714 | 0.785 | 0.7850 |
| 0.5055 | 0.23 | 5950 | 0.4514 | 0.79 | 0.7896 |
| 0.4985 | 0.23 | 6000 | 0.4541 | 0.773 | 0.7731 |
| 0.4985 | 0.23 | 6050 | 0.4587 | 0.788 | 0.7876 |
| 0.4985 | 0.23 | 6100 | 0.4523 | 0.787 | 0.7867 |
| 0.4985 | 0.24 | 6150 | 0.4441 | 0.787 | 0.7870 |
| 0.4985 | 0.24 | 6200 | 0.4529 | 0.784 | 0.7841 |
| 0.4985 | 0.24 | 6250 | 0.4512 | 0.784 | 0.7840 |
| 0.4985 | 0.24 | 6300 | 0.4545 | 0.777 | 0.7757 |
| 0.4985 | 0.24 | 6350 | 0.4399 | 0.788 | 0.7874 |
| 0.4985 | 0.25 | 6400 | 0.4478 | 0.794 | 0.7939 |
| 0.4985 | 0.25 | 6450 | 0.4495 | 0.793 | 0.7930 |
| 0.4937 | 0.25 | 6500 | 0.4454 | 0.792 | 0.7913 |
| 0.4937 | 0.25 | 6550 | 0.4438 | 0.795 | 0.7950 |
| 0.4937 | 0.25 | 6600 | 0.4476 | 0.795 | 0.7948 |
| 0.4937 | 0.25 | 6650 | 0.4448 | 0.794 | 0.7939 |
| 0.4937 | 0.26 | 6700 | 0.4472 | 0.791 | 0.7911 |
| 0.4937 | 0.26 | 6750 | 0.4431 | 0.793 | 0.7924 |
| 0.4937 | 0.26 | 6800 | 0.4434 | 0.796 | 0.7958 |
| 0.4937 | 0.26 | 6850 | 0.4340 | 0.802 | 0.802 |
| 0.4937 | 0.26 | 6900 | 0.4502 | 0.786 | 0.7848 |
| 0.4937 | 0.27 | 6950 | 0.4349 | 0.797 | 0.7964 |
| 0.4826 | 0.27 | 7000 | 0.4348 | 0.79 | 0.7894 |
| 0.4826 | 0.27 | 7050 | 0.4321 | 0.788 | 0.7875 |
| 0.4826 | 0.27 | 7100 | 0.4300 | 0.787 | 0.7868 |
| 0.4826 | 0.27 | 7150 | 0.4346 | 0.78 | 0.7779 |
| 0.4826 | 0.28 | 7200 | 0.4246 | 0.802 | 0.8020 |
| 0.4826 | 0.28 | 7250 | 0.4273 | 0.793 | 0.7930 |
| 0.4826 | 0.28 | 7300 | 0.4346 | 0.79 | 0.7894 |
| 0.4826 | 0.28 | 7350 | 0.4358 | 0.789 | 0.7887 |
| 0.4826 | 0.28 | 7400 | 0.4368 | 0.788 | 0.7871 |
| 0.4826 | 0.29 | 7450 | 0.4426 | 0.784 | 0.7841 |
| 0.4756 | 0.29 | 7500 | 0.4312 | 0.802 | 0.8019 |
| 0.4756 | 0.29 | 7550 | 0.4303 | 0.795 | 0.7944 |
| 0.4756 | 0.29 | 7600 | 0.4391 | 0.792 | 0.7916 |
| 0.4756 | 0.29 | 7650 | 0.4325 | 0.793 | 0.7922 |
| 0.4756 | 0.29 | 7700 | 0.4283 | 0.793 | 0.7920 |
| 0.4756 | 0.3 | 7750 | 0.4271 | 0.799 | 0.7991 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
daoyuanzhai/distilgpt2-finetuned-wikitext2
|
daoyuanzhai
| 2023-12-24T09:46:26Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T09:14:17Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6434
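Assuming this is the usual token-level cross-entropy, the loss of 3.6434 corresponds to a perplexity of roughly exp(3.6434) ≈ 38. A minimal generation sketch with the `text-generation` pipeline (assuming the checkpoint is hosted under this repository id; the prompt is a placeholder):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="daoyuanzhai/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=50)[0]["generated_text"])
```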
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7501 | 1.0 | 2334 | 3.6669 |
| 3.6498 | 2.0 | 4668 | 3.6464 |
| 3.5868 | 3.0 | 7002 | 3.6434 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
Akshat4112/bloom_prompt_tuning_1703410758.7083309
|
Akshat4112
| 2023-12-24T09:39:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"region:us"
] | null | 2023-12-24T09:39:46Z |
---
library_name: peft
base_model: bigscience/bloomz-560m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
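In the absence of repository-specific instructions, a minimal loading sketch with `peft` (assuming this repository holds a prompt-tuning adapter for the `bigscience/bloomz-560m` base model listed in the metadata; the prompt is a placeholder):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
model = PeftModel.from_pretrained(base_model, "Akshat4112/bloom_prompt_tuning_1703410758.7083309")

# Placeholder prompt; the task this adapter was tuned for is not documented here
inputs = tokenizer("Tweet text: I loved the movie! Label:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```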
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
hkivancoral/hushem_40x_deit_base_sgd_00001_fold2
|
hkivancoral
| 2023-12-24T09:34:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T08:53:10Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.3111111111111111
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3898
- Accuracy: 0.3111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4168 | 1.0 | 215 | 1.4077 | 0.2444 |
| 1.3843 | 2.0 | 430 | 1.4068 | 0.2444 |
| 1.4045 | 3.0 | 645 | 1.4059 | 0.2444 |
| 1.3944 | 4.0 | 860 | 1.4051 | 0.2444 |
| 1.3979 | 5.0 | 1075 | 1.4043 | 0.2444 |
| 1.4212 | 6.0 | 1290 | 1.4036 | 0.2667 |
| 1.4197 | 7.0 | 1505 | 1.4029 | 0.2667 |
| 1.369 | 8.0 | 1720 | 1.4022 | 0.2667 |
| 1.3853 | 9.0 | 1935 | 1.4015 | 0.2667 |
| 1.4053 | 10.0 | 2150 | 1.4008 | 0.2667 |
| 1.3723 | 11.0 | 2365 | 1.4002 | 0.2667 |
| 1.3571 | 12.0 | 2580 | 1.3996 | 0.2667 |
| 1.3936 | 13.0 | 2795 | 1.3990 | 0.2667 |
| 1.3779 | 14.0 | 3010 | 1.3985 | 0.2667 |
| 1.3861 | 15.0 | 3225 | 1.3979 | 0.2667 |
| 1.4005 | 16.0 | 3440 | 1.3974 | 0.2889 |
| 1.3769 | 17.0 | 3655 | 1.3969 | 0.2889 |
| 1.3909 | 18.0 | 3870 | 1.3964 | 0.2889 |
| 1.3834 | 19.0 | 4085 | 1.3960 | 0.2889 |
| 1.3642 | 20.0 | 4300 | 1.3956 | 0.2889 |
| 1.3863 | 21.0 | 4515 | 1.3951 | 0.2889 |
| 1.3863 | 22.0 | 4730 | 1.3947 | 0.2889 |
| 1.3703 | 23.0 | 4945 | 1.3944 | 0.2889 |
| 1.3733 | 24.0 | 5160 | 1.3940 | 0.2889 |
| 1.3751 | 25.0 | 5375 | 1.3937 | 0.3111 |
| 1.3799 | 26.0 | 5590 | 1.3933 | 0.3111 |
| 1.3637 | 27.0 | 5805 | 1.3930 | 0.3111 |
| 1.3658 | 28.0 | 6020 | 1.3927 | 0.3111 |
| 1.3837 | 29.0 | 6235 | 1.3924 | 0.3111 |
| 1.3573 | 30.0 | 6450 | 1.3922 | 0.3111 |
| 1.3483 | 31.0 | 6665 | 1.3919 | 0.3111 |
| 1.3737 | 32.0 | 6880 | 1.3917 | 0.3111 |
| 1.3567 | 33.0 | 7095 | 1.3915 | 0.3111 |
| 1.3764 | 34.0 | 7310 | 1.3913 | 0.3111 |
| 1.3646 | 35.0 | 7525 | 1.3911 | 0.3111 |
| 1.3557 | 36.0 | 7740 | 1.3909 | 0.3111 |
| 1.3829 | 37.0 | 7955 | 1.3907 | 0.3111 |
| 1.3713 | 38.0 | 8170 | 1.3906 | 0.3111 |
| 1.3468 | 39.0 | 8385 | 1.3905 | 0.3111 |
| 1.3527 | 40.0 | 8600 | 1.3903 | 0.3111 |
| 1.3629 | 41.0 | 8815 | 1.3902 | 0.3111 |
| 1.3464 | 42.0 | 9030 | 1.3901 | 0.3111 |
| 1.3709 | 43.0 | 9245 | 1.3901 | 0.3111 |
| 1.3524 | 44.0 | 9460 | 1.3900 | 0.3111 |
| 1.3532 | 45.0 | 9675 | 1.3899 | 0.3111 |
| 1.3657 | 46.0 | 9890 | 1.3899 | 0.3111 |
| 1.3891 | 47.0 | 10105 | 1.3899 | 0.3111 |
| 1.3666 | 48.0 | 10320 | 1.3898 | 0.3111 |
| 1.3713 | 49.0 | 10535 | 1.3898 | 0.3111 |
| 1.3614 | 50.0 | 10750 | 1.3898 | 0.3111 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Anwaarma/Merged-Server-praj
|
Anwaarma
| 2023-12-24T09:30:05Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Anwaarma/BP-server02andinteger03",
"base_model:finetune:Anwaarma/BP-server02andinteger03",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T08:29:19Z |
---
base_model: Anwaarma/BP-server02andinteger03
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Merged-Server-praj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Merged-Server-praj
This model is a fine-tuned version of [Anwaarma/BP-server02andinteger03](https://huggingface.co/Anwaarma/BP-server02andinteger03) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6453
- Accuracy: 0.619
- F1: 0.7647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.0 | 50 | 0.6887 | 0.518 | 0.4747 |
| No log | 0.0 | 100 | 0.6872 | 0.539 | 0.4913 |
| No log | 0.01 | 150 | 0.6870 | 0.556 | 0.5124 |
| No log | 0.01 | 200 | 0.6845 | 0.554 | 0.5490 |
| No log | 0.01 | 250 | 0.6823 | 0.555 | 0.5509 |
| No log | 0.01 | 300 | 0.6818 | 0.545 | 0.5339 |
| No log | 0.02 | 350 | 0.6841 | 0.561 | 0.5207 |
| No log | 0.02 | 400 | 0.6845 | 0.566 | 0.5402 |
| No log | 0.02 | 450 | 0.6792 | 0.566 | 0.5650 |
| 0.6886 | 0.02 | 500 | 0.6808 | 0.532 | 0.4867 |
| 0.6886 | 0.02 | 550 | 0.6821 | 0.527 | 0.4643 |
| 0.6886 | 0.03 | 600 | 0.6777 | 0.577 | 0.5729 |
| 0.6886 | 0.03 | 650 | 0.6758 | 0.58 | 0.58 |
| 0.6886 | 0.03 | 700 | 0.6762 | 0.585 | 0.5793 |
| 0.6886 | 0.03 | 750 | 0.6743 | 0.577 | 0.5731 |
| 0.6886 | 0.04 | 800 | 0.6731 | 0.577 | 0.5762 |
| 0.6886 | 0.04 | 850 | 0.6762 | 0.582 | 0.5534 |
| 0.6886 | 0.04 | 900 | 0.6749 | 0.568 | 0.5458 |
| 0.6886 | 0.04 | 950 | 0.6739 | 0.578 | 0.5746 |
| 0.6819 | 0.04 | 1000 | 0.6715 | 0.588 | 0.588 |
| 0.6819 | 0.05 | 1050 | 0.6741 | 0.579 | 0.5524 |
| 0.6819 | 0.05 | 1100 | 0.6712 | 0.592 | 0.5894 |
| 0.6819 | 0.05 | 1150 | 0.6692 | 0.594 | 0.5905 |
| 0.6819 | 0.05 | 1200 | 0.6683 | 0.597 | 0.5957 |
| 0.6819 | 0.06 | 1250 | 0.6757 | 0.566 | 0.5209 |
| 0.6819 | 0.06 | 1300 | 0.6676 | 0.6 | 0.5882 |
| 0.6819 | 0.06 | 1350 | 0.6655 | 0.598 | 0.5943 |
| 0.6819 | 0.06 | 1400 | 0.6671 | 0.584 | 0.5724 |
| 0.6819 | 0.06 | 1450 | 0.6631 | 0.587 | 0.5812 |
| 0.6763 | 0.07 | 1500 | 0.6614 | 0.62 | 0.6140 |
| 0.6763 | 0.07 | 1550 | 0.6610 | 0.603 | 0.6018 |
| 0.6763 | 0.07 | 1600 | 0.6615 | 0.589 | 0.5843 |
| 0.6763 | 0.07 | 1650 | 0.6597 | 0.633 | 0.6326 |
| 0.6763 | 0.08 | 1700 | 0.6561 | 0.607 | 0.6025 |
| 0.6763 | 0.08 | 1750 | 0.6515 | 0.632 | 0.6318 |
| 0.6763 | 0.08 | 1800 | 0.6509 | 0.635 | 0.6323 |
| 0.6763 | 0.08 | 1850 | 0.6538 | 0.604 | 0.5903 |
| 0.6763 | 0.08 | 1900 | 0.6483 | 0.632 | 0.632 |
| 0.6763 | 0.09 | 1950 | 0.6513 | 0.616 | 0.6142 |
| 0.6659 | 0.09 | 2000 | 0.6475 | 0.618 | 0.6166 |
| 0.6659 | 0.09 | 2050 | 0.6443 | 0.613 | 0.6106 |
| 0.6659 | 0.09 | 2100 | 0.6482 | 0.629 | 0.6261 |
| 0.6659 | 0.1 | 2150 | 0.6552 | 0.604 | 0.5837 |
| 0.6659 | 0.1 | 2200 | 0.6492 | 0.618 | 0.6042 |
| 0.6659 | 0.1 | 2250 | 0.6441 | 0.63 | 0.6256 |
| 0.6659 | 0.1 | 2300 | 0.6447 | 0.63 | 0.6293 |
| 0.6659 | 0.1 | 2350 | 0.6429 | 0.633 | 0.6317 |
| 0.6659 | 0.11 | 2400 | 0.6450 | 0.62 | 0.6200 |
| 0.6659 | 0.11 | 2450 | 0.6480 | 0.639 | 0.6358 |
| 0.6577 | 0.11 | 2500 | 0.6480 | 0.601 | 0.5844 |
| 0.6577 | 0.11 | 2550 | 0.6422 | 0.641 | 0.6402 |
| 0.6577 | 0.11 | 2600 | 0.6407 | 0.623 | 0.6177 |
| 0.6577 | 0.12 | 2650 | 0.6393 | 0.634 | 0.6336 |
| 0.6577 | 0.12 | 2700 | 0.6405 | 0.635 | 0.6320 |
| 0.6577 | 0.12 | 2750 | 0.6422 | 0.642 | 0.6413 |
| 0.6577 | 0.12 | 2800 | 0.6355 | 0.632 | 0.6271 |
| 0.6577 | 0.13 | 2850 | 0.6351 | 0.622 | 0.6150 |
| 0.6577 | 0.13 | 2900 | 0.6331 | 0.658 | 0.6569 |
| 0.6577 | 0.13 | 2950 | 0.6352 | 0.656 | 0.6548 |
| 0.6569 | 0.13 | 3000 | 0.6385 | 0.609 | 0.5927 |
| 0.6569 | 0.13 | 3050 | 0.6307 | 0.645 | 0.6449 |
| 0.6569 | 0.14 | 3100 | 0.6304 | 0.639 | 0.6324 |
| 0.6569 | 0.14 | 3150 | 0.6268 | 0.654 | 0.6540 |
| 0.6569 | 0.14 | 3200 | 0.6297 | 0.641 | 0.6402 |
| 0.6569 | 0.14 | 3250 | 0.6293 | 0.651 | 0.6488 |
| 0.6569 | 0.15 | 3300 | 0.6251 | 0.66 | 0.66 |
| 0.6569 | 0.15 | 3350 | 0.6207 | 0.653 | 0.6513 |
| 0.6569 | 0.15 | 3400 | 0.6184 | 0.649 | 0.6470 |
| 0.6569 | 0.15 | 3450 | 0.6176 | 0.653 | 0.6523 |
| 0.6394 | 0.15 | 3500 | 0.6195 | 0.654 | 0.6527 |
| 0.6394 | 0.16 | 3550 | 0.6172 | 0.655 | 0.6535 |
| 0.6394 | 0.16 | 3600 | 0.6168 | 0.659 | 0.6590 |
| 0.6394 | 0.16 | 3650 | 0.6138 | 0.664 | 0.6631 |
| 0.6394 | 0.16 | 3700 | 0.6228 | 0.659 | 0.6478 |
| 0.6394 | 0.17 | 3750 | 0.6117 | 0.662 | 0.6608 |
| 0.6394 | 0.17 | 3800 | 0.6121 | 0.645 | 0.6443 |
| 0.6394 | 0.17 | 3850 | 0.6099 | 0.652 | 0.6515 |
| 0.6394 | 0.17 | 3900 | 0.6128 | 0.657 | 0.6515 |
| 0.6394 | 0.17 | 3950 | 0.6103 | 0.65 | 0.6499 |
| 0.6366 | 0.18 | 4000 | 0.6149 | 0.652 | 0.6447 |
| 0.6366 | 0.18 | 4050 | 0.6111 | 0.652 | 0.6518 |
| 0.6366 | 0.18 | 4100 | 0.6098 | 0.651 | 0.6502 |
| 0.6366 | 0.18 | 4150 | 0.6072 | 0.666 | 0.6640 |
| 0.6366 | 0.19 | 4200 | 0.6065 | 0.669 | 0.6646 |
| 0.6366 | 0.19 | 4250 | 0.6057 | 0.658 | 0.6531 |
| 0.6366 | 0.19 | 4300 | 0.6044 | 0.667 | 0.6634 |
| 0.6366 | 0.19 | 4350 | 0.6061 | 0.656 | 0.6451 |
| 0.6366 | 0.19 | 4400 | 0.5962 | 0.666 | 0.6660 |
| 0.6366 | 0.2 | 4450 | 0.5936 | 0.664 | 0.6637 |
| 0.6275 | 0.2 | 4500 | 0.5978 | 0.651 | 0.6508 |
| 0.6275 | 0.2 | 4550 | 0.5998 | 0.664 | 0.6601 |
| 0.6275 | 0.2 | 4600 | 0.5967 | 0.657 | 0.6560 |
| 0.6275 | 0.21 | 4650 | 0.5931 | 0.671 | 0.6708 |
| 0.6275 | 0.21 | 4700 | 0.5953 | 0.686 | 0.6856 |
| 0.6275 | 0.21 | 4750 | 0.6119 | 0.643 | 0.6331 |
| 0.6275 | 0.21 | 4800 | 0.6035 | 0.657 | 0.6497 |
| 0.6275 | 0.21 | 4850 | 0.5864 | 0.676 | 0.6756 |
| 0.6275 | 0.22 | 4900 | 0.5876 | 0.681 | 0.6809 |
| 0.6275 | 0.22 | 4950 | 0.5864 | 0.677 | 0.6762 |
| 0.6279 | 0.22 | 5000 | 0.5876 | 0.677 | 0.6770 |
| 0.6279 | 0.22 | 5050 | 0.5876 | 0.666 | 0.6636 |
| 0.6279 | 0.23 | 5100 | 0.5848 | 0.675 | 0.6745 |
| 0.6279 | 0.23 | 5150 | 0.5904 | 0.672 | 0.6719 |
| 0.6279 | 0.23 | 5200 | 0.5854 | 0.672 | 0.6718 |
| 0.6279 | 0.23 | 5250 | 0.5864 | 0.673 | 0.6709 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
mcysqrd/mojo-coder-1B
|
mcysqrd
| 2023-12-24T09:25:24Z | 7 | 6 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:mcysqrd/mojo_code",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T08:47:05Z |
---
license: apache-2.0
datasets:
- mcysqrd/mojo_code
---
This model was trained with a fill-in-the-middle (FIM) objective over deepseek-coder-1.3B using a Mojo code dataset.
This is an alpha version, trained only for FIM co-pilot style usage.
Later versions should add Q&A support as well as better performance. Please leave your comments to help improve it.
The recipe was based on the template from https://huggingface.co/blog/personal-copilot.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

# Path or Hub repo id of the merged FIM-tuned model (e.g. this repository)
merged_model_path = "mcysqrd/mojo-coder-1B"

tokenizer = AutoTokenizer.from_pretrained(merged_model_path, trust_remote_code=True, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    merged_model_path,
    device_map={"": 0},
    use_cache=True,
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
)

# Fill-in-the-middle prompt: the model completes the code at <|fim▁hole|>
input_text = """<|fim▁begin|>
from algorithm import parallelize, vectorize
from benchmark import Benchmark
from complex import ComplexSIMD, ComplexFloat64
from math import iota
from os import env
from python import Python
from python.object import PythonObject
from runtime.llcl import num_cores, Runtime
from tensor import Tensor
from utils.index import Index
alias float_type = DType.float64
alias simd_width = simdwidthof[float_type]()
alias width = 960
alias height = 960
alias MAX_ITERS = 200
alias min_x = -2.0
alias max_x = 0.6
alias min_y = -1.5
alias max_y = 1.5
fn mandelbrot_kernel_SIMD[
    simd_width: Int
](c: ComplexSIMD[float_type, simd_width]) -> SIMD[float_type, simd_width]:
    let cx = c.re
    let cy = c.im
    var x = SIMD[float_type, simd_width](0)
    var y = SIMD[float_type, simd_width](0)
    var y2 = SIMD[float_type, simd_width](0)
    var iters = SIMD[float_type, simd_width](0)
    var t: SIMD[DType.bool, simd_width] = True
    for i in range(MAX_ITERS):
        if not t.reduce_or():
            break
        y2 = y*y
        y = x.fma(y + y, cy)
        t = x.fma(x, y2) <= 4
        x = x.fma(x, cx - y2)
        iters = t.select(iters + 1, iters)
    return iters
fn compare():
    let t = Tensor[float_type](height, width)
    @parameter
    fn worker(row: Int):
        let scale_x = (max_x - min_x) / width
        let scale_y = (max_y - min_y) / height
<|fim▁hole|>
fn main():
    compare()
<|fim▁end|>"""

# One-shot generation: decode and print only the text generated after the prompt
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=547 + 200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True)[len(input_text):])

# Streaming variant: print tokens as they are generated
def stream(user_prompt):
    runtime_flag = "cuda:0"
    inputs = tokenizer([user_prompt], return_tensors="pt").to(runtime_flag)
    streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    _ = model.generate(**inputs, streamer=streamer, max_new_tokens=200)

stream(input_text)
```
You can also try deploying it to an inference endpoint and using it from a VS Code extension.
|
Adalee1001/my_awesome_mind_model
|
Adalee1001
| 2023-12-24T09:12:46Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-12-24T07:51:08Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.0442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | nan | 0.0442 |
| No log | 1.87 | 7 | nan | 0.0442 |
| 241.8072 | 2.93 | 11 | nan | 0.0442 |
| 241.8072 | 4.0 | 15 | nan | 0.0442 |
| 241.8072 | 4.8 | 18 | nan | 0.0442 |
| 0.0 | 5.87 | 22 | nan | 0.0442 |
| 0.0 | 6.93 | 26 | nan | 0.0442 |
| 0.0 | 8.0 | 30 | nan | 0.0442 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
LeamonLee/lilt-en-funsd
|
LeamonLee
| 2023-12-24T09:12:26Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"lilt",
"token-classification",
"generated_from_trainer",
"base_model:SCUT-DLVCLab/lilt-roberta-en-base",
"base_model:finetune:SCUT-DLVCLab/lilt-roberta-en-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-24T09:12:05Z |
---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
- generated_from_trainer
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6973
- Answer: {'precision': 0.8658109684947491, 'recall': 0.9082007343941249, 'f1': 0.886499402628435, 'number': 817}
- Header: {'precision': 0.6770833333333334, 'recall': 0.5462184873949579, 'f1': 0.6046511627906976, 'number': 119}
- Question: {'precision': 0.9074243813015582, 'recall': 0.9192200557103064, 'f1': 0.9132841328413284, 'number': 1077}
- Overall Precision: 0.8792
- Overall Recall: 0.8927
- Overall F1: 0.8859
- Overall Accuracy: 0.8011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.1857 | 26.32 | 500 | 1.4181 | {'precision': 0.8298109010011123, 'recall': 0.9130966952264382, 'f1': 0.8694638694638694, 'number': 817} | {'precision': 0.6923076923076923, 'recall': 0.5294117647058824, 'f1': 0.5999999999999999, 'number': 119} | {'precision': 0.886672710788758, 'recall': 0.9080779944289693, 'f1': 0.8972477064220182, 'number': 1077} | 0.8538 | 0.8877 | 0.8704 | 0.7981 |
| 0.0068 | 52.63 | 1000 | 1.6084 | {'precision': 0.8581235697940504, 'recall': 0.9179926560587516, 'f1': 0.8870490833826139, 'number': 817} | {'precision': 0.5877192982456141, 'recall': 0.5630252100840336, 'f1': 0.5751072961373391, 'number': 119} | {'precision': 0.9083255378858747, 'recall': 0.9015784586815228, 'f1': 0.9049394221808015, 'number': 1077} | 0.8692 | 0.8882 | 0.8786 | 0.7956 |
| 0.0018 | 78.95 | 1500 | 1.6068 | {'precision': 0.8742655699177438, 'recall': 0.9106487148102815, 'f1': 0.8920863309352519, 'number': 817} | {'precision': 0.6050420168067226, 'recall': 0.6050420168067226, 'f1': 0.6050420168067226, 'number': 119} | {'precision': 0.902867715078631, 'recall': 0.9062209842154132, 'f1': 0.9045412418906396, 'number': 1077} | 0.8737 | 0.8902 | 0.8819 | 0.8095 |
| 0.0007 | 105.26 | 2000 | 1.6522 | {'precision': 0.8611111111111112, 'recall': 0.9106487148102815, 'f1': 0.8851873884592504, 'number': 817} | {'precision': 0.6126126126126126, 'recall': 0.5714285714285714, 'f1': 0.591304347826087, 'number': 119} | {'precision': 0.9098513011152416, 'recall': 0.9090064995357474, 'f1': 0.9094287041337669, 'number': 1077} | 0.8732 | 0.8897 | 0.8814 | 0.8028 |
| 0.0002 | 131.58 | 2500 | 1.6973 | {'precision': 0.8658109684947491, 'recall': 0.9082007343941249, 'f1': 0.886499402628435, 'number': 817} | {'precision': 0.6770833333333334, 'recall': 0.5462184873949579, 'f1': 0.6046511627906976, 'number': 119} | {'precision': 0.9074243813015582, 'recall': 0.9192200557103064, 'f1': 0.9132841328413284, 'number': 1077} | 0.8792 | 0.8927 | 0.8859 | 0.8011 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_RandomError0.2_Seed104
|
behzadnet
| 2023-12-24T09:04:58Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-24T09:04:53Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
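For reference, the settings above map roughly onto the following `BitsAndBytesConfig` when loading the base model from the card metadata — a minimal sketch of the usual transformers/bitsandbytes workflow, not code shipped with this adapter:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above (4-bit NF4, double quant, bf16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model taken from the card metadata; the PEFT adapter is trained on top of it.
model = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
)
```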
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_RandomError0.2_Seed104
|
behzadnet
| 2023-12-24T09:04:47Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2023-12-24T09:04:41Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
SJTU-CL/RoBERTa-large-ArguGPT
|
SJTU-CL
| 2023-12-24T09:04:06Z | 20 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"ArguGPT",
"en",
"dataset:SJTU-CL/ArguGPT",
"arxiv:2304.07666",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-07T06:54:49Z |
---
language:
- en
thumbnail: url to a thumbnail used in social sharing
tags:
- ArguGPT
license: mit
datasets:
- SJTU-CL/ArguGPT
metrics:
- accuracy
pipeline_tag: text-classification
---
# ArguGPT
RoBERTa-large fine-tuned on ArguGPT essays.
- label 1 for machine-generated essays
- label 0 for human-written essays
**Please truncate your input essay to 512 tokens.**
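A minimal usage sketch (an assumption; the card itself gives no code) that applies the 512-token truncation:
```python
from transformers import pipeline

detector = pipeline("text-classification", model="SJTU-CL/RoBERTa-large-ArguGPT")

essay = "Your essay text goes here ..."
# Truncate to the model's 512-token limit as requested above.
result = detector(essay, truncation=True, max_length=512)
print(result)  # label 1 = machine-generated, label 0 = human-written (label names may differ)
```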
## Citation
Please cite our work [arXiv:2304.07666](https://arxiv.org/abs/2304.07666) as
```
@misc{liu2023argugpt,
title={ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models},
author={Yikang Liu and Ziyin Zhang and Wanyang Zhang and Shisen Yue and Xiaojing Zhao and Xinyuan Cheng and Yiwen Zhang and Hai Hu},
year={2023},
eprint={2304.07666},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
SJTU-CL/RoBERTa-large-ArguGPT-sent
|
SJTU-CL
| 2023-12-24T09:03:24Z | 24 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"AIGC for education",
"en",
"dataset:SJTU-CL/ArguGPT",
"arxiv:2304.07666",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-04-17T00:45:43Z |
---
datasets:
- SJTU-CL/ArguGPT
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- AIGC for education
---
## Citation
Please cite our work [arXiv:2304.07666](https://arxiv.org/abs/2304.07666) as
```
@misc{liu2023argugpt,
title={ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models},
author={Yikang Liu and Ziyin Zhang and Wanyang Zhang and Shisen Yue and Xiaojing Zhao and Xinyuan Cheng and Yiwen Zhang and Hai Hu},
year={2023},
eprint={2304.07666},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
hkivancoral/hushem_40x_deit_base_sgd_00001_fold1
|
hkivancoral
| 2023-12-24T08:53:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T08:11:42Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.28888888888888886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4455
- Accuracy: 0.2889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.379 | 1.0 | 215 | 1.4681 | 0.2889 |
| 1.3967 | 2.0 | 430 | 1.4670 | 0.2889 |
| 1.423 | 3.0 | 645 | 1.4660 | 0.2889 |
| 1.4018 | 4.0 | 860 | 1.4650 | 0.2889 |
| 1.3899 | 5.0 | 1075 | 1.4640 | 0.2889 |
| 1.4076 | 6.0 | 1290 | 1.4631 | 0.2889 |
| 1.3743 | 7.0 | 1505 | 1.4622 | 0.2889 |
| 1.3724 | 8.0 | 1720 | 1.4613 | 0.2889 |
| 1.3757 | 9.0 | 1935 | 1.4604 | 0.2889 |
| 1.3783 | 10.0 | 2150 | 1.4596 | 0.2889 |
| 1.4141 | 11.0 | 2365 | 1.4589 | 0.2889 |
| 1.3702 | 12.0 | 2580 | 1.4581 | 0.2889 |
| 1.3842 | 13.0 | 2795 | 1.4574 | 0.2889 |
| 1.3926 | 14.0 | 3010 | 1.4567 | 0.2889 |
| 1.3764 | 15.0 | 3225 | 1.4560 | 0.2889 |
| 1.3955 | 16.0 | 3440 | 1.4553 | 0.2889 |
| 1.3752 | 17.0 | 3655 | 1.4547 | 0.2889 |
| 1.3872 | 18.0 | 3870 | 1.4541 | 0.2889 |
| 1.3795 | 19.0 | 4085 | 1.4535 | 0.2889 |
| 1.3768 | 20.0 | 4300 | 1.4530 | 0.2889 |
| 1.3609 | 21.0 | 4515 | 1.4524 | 0.2889 |
| 1.3552 | 22.0 | 4730 | 1.4519 | 0.2889 |
| 1.3869 | 23.0 | 4945 | 1.4514 | 0.2889 |
| 1.3741 | 24.0 | 5160 | 1.4510 | 0.2889 |
| 1.3721 | 25.0 | 5375 | 1.4505 | 0.2889 |
| 1.3593 | 26.0 | 5590 | 1.4501 | 0.2889 |
| 1.3536 | 27.0 | 5805 | 1.4497 | 0.2889 |
| 1.3543 | 28.0 | 6020 | 1.4493 | 0.2889 |
| 1.3589 | 29.0 | 6235 | 1.4489 | 0.2889 |
| 1.3445 | 30.0 | 6450 | 1.4486 | 0.2889 |
| 1.3539 | 31.0 | 6665 | 1.4483 | 0.2889 |
| 1.3535 | 32.0 | 6880 | 1.4480 | 0.2889 |
| 1.3498 | 33.0 | 7095 | 1.4477 | 0.2889 |
| 1.3497 | 34.0 | 7310 | 1.4474 | 0.2889 |
| 1.3582 | 35.0 | 7525 | 1.4472 | 0.2889 |
| 1.354 | 36.0 | 7740 | 1.4469 | 0.2889 |
| 1.3681 | 37.0 | 7955 | 1.4467 | 0.2889 |
| 1.346 | 38.0 | 8170 | 1.4465 | 0.2889 |
| 1.3468 | 39.0 | 8385 | 1.4463 | 0.2889 |
| 1.3488 | 40.0 | 8600 | 1.4462 | 0.2889 |
| 1.3542 | 41.0 | 8815 | 1.4460 | 0.2889 |
| 1.3813 | 42.0 | 9030 | 1.4459 | 0.2889 |
| 1.3585 | 43.0 | 9245 | 1.4458 | 0.2889 |
| 1.3347 | 44.0 | 9460 | 1.4457 | 0.2889 |
| 1.3527 | 45.0 | 9675 | 1.4456 | 0.2889 |
| 1.3601 | 46.0 | 9890 | 1.4456 | 0.2889 |
| 1.3484 | 47.0 | 10105 | 1.4455 | 0.2889 |
| 1.3543 | 48.0 | 10320 | 1.4455 | 0.2889 |
| 1.3639 | 49.0 | 10535 | 1.4455 | 0.2889 |
| 1.3697 | 50.0 | 10750 | 1.4455 | 0.2889 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_tiny_adamax_00001_fold1
|
hkivancoral
| 2023-12-24T08:44:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T08:08:23Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_adamax_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7777777777777778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7285
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3612 | 1.0 | 215 | 0.8053 | 0.7111 |
| 0.0954 | 2.0 | 430 | 0.7023 | 0.7111 |
| 0.0263 | 3.0 | 645 | 0.7672 | 0.7333 |
| 0.0113 | 4.0 | 860 | 0.8377 | 0.7778 |
| 0.0012 | 5.0 | 1075 | 0.9748 | 0.7778 |
| 0.0007 | 6.0 | 1290 | 0.9997 | 0.7778 |
| 0.0004 | 7.0 | 1505 | 1.1150 | 0.7556 |
| 0.0003 | 8.0 | 1720 | 1.1439 | 0.7556 |
| 0.0002 | 9.0 | 1935 | 1.2019 | 0.7556 |
| 0.0001 | 10.0 | 2150 | 1.2424 | 0.7556 |
| 0.0001 | 11.0 | 2365 | 1.2284 | 0.7556 |
| 0.0001 | 12.0 | 2580 | 1.2809 | 0.7556 |
| 0.0001 | 13.0 | 2795 | 1.3071 | 0.7556 |
| 0.0001 | 14.0 | 3010 | 1.3721 | 0.7556 |
| 0.0 | 15.0 | 3225 | 1.3804 | 0.7556 |
| 0.0 | 16.0 | 3440 | 1.3850 | 0.7556 |
| 0.0 | 17.0 | 3655 | 1.4005 | 0.7556 |
| 0.0 | 18.0 | 3870 | 1.4317 | 0.7556 |
| 0.0 | 19.0 | 4085 | 1.4823 | 0.7556 |
| 0.0 | 20.0 | 4300 | 1.4810 | 0.7556 |
| 0.0 | 21.0 | 4515 | 1.4751 | 0.7556 |
| 0.0 | 22.0 | 4730 | 1.5073 | 0.7556 |
| 0.0 | 23.0 | 4945 | 1.5283 | 0.7333 |
| 0.0 | 24.0 | 5160 | 1.5592 | 0.7556 |
| 0.0 | 25.0 | 5375 | 1.5298 | 0.7556 |
| 0.0 | 26.0 | 5590 | 1.5228 | 0.7778 |
| 0.0 | 27.0 | 5805 | 1.5617 | 0.7556 |
| 0.0 | 28.0 | 6020 | 1.5609 | 0.7778 |
| 0.0 | 29.0 | 6235 | 1.5791 | 0.7556 |
| 0.0 | 30.0 | 6450 | 1.6043 | 0.7778 |
| 0.0 | 31.0 | 6665 | 1.6159 | 0.7556 |
| 0.0 | 32.0 | 6880 | 1.6584 | 0.7556 |
| 0.0 | 33.0 | 7095 | 1.6250 | 0.7778 |
| 0.0 | 34.0 | 7310 | 1.6097 | 0.7778 |
| 0.0 | 35.0 | 7525 | 1.6615 | 0.7778 |
| 0.0 | 36.0 | 7740 | 1.6489 | 0.7778 |
| 0.0 | 37.0 | 7955 | 1.6559 | 0.7778 |
| 0.0 | 38.0 | 8170 | 1.6854 | 0.7778 |
| 0.0 | 39.0 | 8385 | 1.6826 | 0.7778 |
| 0.0 | 40.0 | 8600 | 1.7344 | 0.7333 |
| 0.0 | 41.0 | 8815 | 1.7007 | 0.7778 |
| 0.0 | 42.0 | 9030 | 1.6800 | 0.7778 |
| 0.0 | 43.0 | 9245 | 1.7149 | 0.7778 |
| 0.0 | 44.0 | 9460 | 1.7189 | 0.7556 |
| 0.0 | 45.0 | 9675 | 1.7288 | 0.7778 |
| 0.0 | 46.0 | 9890 | 1.7097 | 0.7778 |
| 0.0 | 47.0 | 10105 | 1.7285 | 0.7778 |
| 0.0 | 48.0 | 10320 | 1.7184 | 0.7778 |
| 0.0 | 49.0 | 10535 | 1.7322 | 0.7778 |
| 0.0 | 50.0 | 10750 | 1.7285 | 0.7778 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
dishhaa03/beautiful-roses
|
dishhaa03
| 2023-12-24T08:24:09Z | 0 | 0 | null |
[
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-12-24T08:22:07Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Beautiful-roses Dreambooth model trained by dishhaa03 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: CS21B1033
Sample pictures of this concept:

|
stablediffusionapi/pmxl
|
stablediffusionapi
| 2023-12-24T08:08:33Z | 0 | 1 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-10-16T20:06:46Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# PMXL API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "pmxl".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/pmxl)
Model link: [View model](https://modelslab.com/models/pmxl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "pmxl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
Anwaarma/Merged-Int-praj
|
Anwaarma
| 2023-12-24T08:07:50Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:prajjwal1/bert-tiny",
"base_model:finetune:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T08:06:37Z |
---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Merged-Int-praj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Merged-Int-praj
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1460
- Accuracy: 0.96
- F1: 0.9600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.0 | 50 | 0.6933 | 0.5 | 0.3333 |
| No log | 0.01 | 100 | 0.6929 | 0.58 | 0.4900 |
| No log | 0.01 | 150 | 0.6937 | 0.5 | 0.3333 |
| No log | 0.01 | 200 | 0.6951 | 0.5 | 0.3333 |
| No log | 0.02 | 250 | 0.6902 | 0.52 | 0.5130 |
| No log | 0.02 | 300 | 0.6909 | 0.5 | 0.3333 |
| No log | 0.02 | 350 | 0.6795 | 0.56 | 0.4762 |
| No log | 0.03 | 400 | 0.6524 | 0.61 | 0.6010 |
| No log | 0.03 | 450 | 0.6139 | 0.71 | 0.7100 |
| 0.6779 | 0.03 | 500 | 0.5827 | 0.71 | 0.7033 |
| 0.6779 | 0.04 | 550 | 0.5732 | 0.71 | 0.7033 |
| 0.6779 | 0.04 | 600 | 0.5467 | 0.74 | 0.7396 |
| 0.6779 | 0.04 | 650 | 0.5174 | 0.8 | 0.7980 |
| 0.6779 | 0.05 | 700 | 0.5193 | 0.74 | 0.7399 |
| 0.6779 | 0.05 | 750 | 0.4905 | 0.8 | 0.7980 |
| 0.6779 | 0.05 | 800 | 0.4710 | 0.8 | 0.7980 |
| 0.6779 | 0.06 | 850 | 0.4523 | 0.83 | 0.8271 |
| 0.6779 | 0.06 | 900 | 0.4373 | 0.84 | 0.8368 |
| 0.6779 | 0.06 | 950 | 0.4214 | 0.84 | 0.8368 |
| 0.5615 | 0.07 | 1000 | 0.4086 | 0.84 | 0.8368 |
| 0.5615 | 0.07 | 1050 | 0.3803 | 0.84 | 0.8368 |
| 0.5615 | 0.07 | 1100 | 0.3476 | 0.9 | 0.8994 |
| 0.5615 | 0.08 | 1150 | 0.3218 | 0.91 | 0.9096 |
| 0.5615 | 0.08 | 1200 | 0.3028 | 0.91 | 0.9096 |
| 0.5615 | 0.08 | 1250 | 0.2851 | 0.92 | 0.9195 |
| 0.5615 | 0.09 | 1300 | 0.2737 | 0.92 | 0.9195 |
| 0.5615 | 0.09 | 1350 | 0.2637 | 0.91 | 0.9096 |
| 0.5615 | 0.09 | 1400 | 0.2560 | 0.92 | 0.9195 |
| 0.5615 | 0.1 | 1450 | 0.2426 | 0.92 | 0.9199 |
| 0.4267 | 0.1 | 1500 | 0.2390 | 0.89 | 0.8897 |
| 0.4267 | 0.1 | 1550 | 0.2320 | 0.92 | 0.9199 |
| 0.4267 | 0.11 | 1600 | 0.2239 | 0.93 | 0.9298 |
| 0.4267 | 0.11 | 1650 | 0.2159 | 0.94 | 0.9398 |
| 0.4267 | 0.11 | 1700 | 0.2156 | 0.93 | 0.9298 |
| 0.4267 | 0.12 | 1750 | 0.2079 | 0.93 | 0.9298 |
| 0.4267 | 0.12 | 1800 | 0.1938 | 0.93 | 0.9298 |
| 0.4267 | 0.12 | 1850 | 0.1909 | 0.93 | 0.9298 |
| 0.4267 | 0.13 | 1900 | 0.1923 | 0.93 | 0.9298 |
| 0.4267 | 0.13 | 1950 | 0.1893 | 0.94 | 0.9398 |
| 0.3491 | 0.13 | 2000 | 0.1633 | 0.96 | 0.9600 |
| 0.3491 | 0.14 | 2050 | 0.1662 | 0.95 | 0.9500 |
| 0.3491 | 0.14 | 2100 | 0.1494 | 0.96 | 0.9600 |
| 0.3491 | 0.14 | 2150 | 0.1606 | 0.95 | 0.9499 |
| 0.3491 | 0.15 | 2200 | 0.1595 | 0.96 | 0.9599 |
| 0.3491 | 0.15 | 2250 | 0.1460 | 0.96 | 0.9600 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
boracious/llama-2-7b-finetune-korquad
|
boracious
| 2023-12-24T08:07:17Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T08:07:11Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
TingTing0104/distilbert-base-uncased-finetuned-ag_news
|
TingTing0104
| 2023-12-24T08:05:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-24T05:37:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-ag_news
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ag_news
type: ag_news
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9443421052631579
- name: F1
type: f1
value: 0.9444316710321217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ag_news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1691
- Accuracy: 0.9443
- F1: 0.9444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2253 | 1.0 | 3750 | 0.1749 | 0.9411 | 0.9413 |
| 0.1335 | 2.0 | 7500 | 0.1691 | 0.9443 | 0.9444 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.0
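A minimal inference sketch (assumed usage; not part of the auto-generated card):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="TingTing0104/distilbert-base-uncased-finetuned-ag_news",
)
print(clf("Wall Street rallies as tech stocks rebound."))
```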
|
linhtran92/original-epiCPhotoGasm-no-prior-2400steps-1e-6
|
linhtran92
| 2023-12-24T08:05:05Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:Yntec/epiCPhotoGasm",
"base_model:finetune:Yntec/epiCPhotoGasm",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-24T07:14:17Z |
---
license: creativeml-openrail-m
base_model: Yntec/epiCPhotoGasm
instance_prompt: photo of sks man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - linhtran92/original-epiCPhotoGasm-no-prior-2400steps-1e-6
This is a DreamBooth model derived from Yntec/epiCPhotoGasm. The weights were trained on the instance prompt "photo of sks man" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
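A hedged loading sketch (assumed usage with diffusers; the card gives no example):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "linhtran92/original-epiCPhotoGasm-no-prior-2400steps-1e-6",
    torch_dtype=torch.float16,
).to("cuda")

# The instance prompt from the card metadata is "photo of sks man".
image = pipe("photo of sks man, natural light, high detail").images[0]
image.save("sks_man.png")
```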
|
Chars/AI-baki
|
Chars
| 2023-12-24T08:03:46Z | 5 | 0 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2023-12-24T07:58:25Z |
# AI Baki (巴绮)
# Acknowledgements
This project is modified from and trained with [Bert-VITS2-Integration-package](https://github.com/YYuX-1145/Bert-VITS2-Integration-package).
Many thanks to the Bert-VITS2 team.
# Usage
```shell
git clone xxx
```
Install the dependencies:
```shell
pip install -r requirements.txt
```
Make sure you have 8 GB of VRAM and a CUDA-capable GPU.
```shell
python webui.py
```
|
stablediffusionapi/manmaru-mix
|
stablediffusionapi
| 2023-12-24T07:51:45Z | 7 | 1 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-24T07:41:33Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Manmaru mix API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "manmaru-mix".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/manmaru-mix)
Model link: [View model](https://modelslab.com/models/manmaru-mix)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "manmaru-mix",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
stablediffusionapi/realistic-inpaint
|
stablediffusionapi
| 2023-12-24T07:51:44Z | 1 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-24T07:41:28Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Realistic Vision V5.1 API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "realistic-inpaint".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/realistic-inpaint)
Model link: [View model](https://modelslab.com/models/realistic-inpaint)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "realistic-inpaint",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
shubnandi/distilbert-base-uncased-lora-txt-clfn
|
shubnandi
| 2023-12-24T07:28:10Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"region:us"
] | null | 2023-12-24T07:28:08Z |
---
library_name: peft
base_model: distilbert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
zaenalium/Esteler-DistilBERT-id
|
zaenalium
| 2023-12-24T07:26:08Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"id",
"dataset:zaenalium/esteler-idn-corpus",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-22T06:08:19Z |
---
license: mit
datasets:
- zaenalium/esteler-idn-corpus
language:
- id
pipeline_tag: fill-mask
---
A fine-tuned version of DistilBERT for Bahasa Indonesia (Indonesian).
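A minimal fill-mask sketch (assumed usage; the `[MASK]` token follows the standard DistilBERT tokenizer):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="zaenalium/Esteler-DistilBERT-id")
print(unmasker("Ibu kota Indonesia adalah [MASK]."))
```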
|
lxl2023/autotrain-v7eqd-8qq
|
lxl2023
| 2023-12-24T07:18:32Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:lxl2023/autotrain-data-autotrain-v7eqd-8qq",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T07:18:20Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- lxl2023/autotrain-data-autotrain-v7eqd-8qq
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: nan
- f1_macro: 0.16666666666666666
- f1_micro: 0.3333333333333333
- f1_weighted: 0.16666666666666666
- precision_macro: 0.1111111111111111
- precision_micro: 0.3333333333333333
- precision_weighted: 0.1111111111111111
- recall_macro: 0.3333333333333333
- recall_micro: 0.3333333333333333
- recall_weighted: 0.3333333333333333
- accuracy: 0.3333333333333333
|
aramasethu/llama_lora_test
|
aramasethu
| 2023-12-24T07:17:03Z | 0 | 0 | null |
[
"optimum_habana",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:finetune:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | 2023-12-24T06:36:39Z |
---
license: other
base_model: huggyllama/llama-7b
tags:
- generated_from_trainer
model-index:
- name: llama_lora_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama_lora_test
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.33.0
- Pytorch 2.1.0a0+gitf8b6084
- Datasets 2.15.0
- Tokenizers 0.13.3
|
FirstLast/RealisticVision-LoRA-lidrs-2
|
FirstLast
| 2023-12-24T06:44:30Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:SG161222/Realistic_Vision_V5.1_noVAE",
"base_model:adapter:SG161222/Realistic_Vision_V5.1_noVAE",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-12-24T06:08:42Z |
---
license: creativeml-openrail-m
base_model: SG161222/Realistic_Vision_V5.1_noVAE
instance_prompt: lidrs dress
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - FirstLast/RealisticVision-LoRA-lidrs-2
These are LoRA adaptation weights for SG161222/Realistic_Vision_V5.1_noVAE. The weights were trained on the instance prompt "lidrs dress" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
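A hedged sketch (assumed usage; not from the card) of applying these LoRA weights on top of the base model with diffusers:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    torch_dtype=torch.float16,
).to("cuda")
# Depending on the base repo contents, you may need to supply a separate VAE.

# Load the LoRA adaptation weights from this repo.
pipe.load_lora_weights("FirstLast/RealisticVision-LoRA-lidrs-2")

image = pipe("a woman in a lidrs dress, full-body photo").images[0]
image.save("lidrs_dress.png")
```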
|
martyn/mixtral-megamerge-dare-8x7b-v1
|
martyn
| 2023-12-24T06:44:08Z | 1,549 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mixtral",
"text-generation",
"dare",
"super mario merge",
"merge",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-12-24T05:26:51Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- dare
- super mario merge
- pytorch
- mixtral
- merge
---
# mixtral megamerge 8x7b v1
The following models were merged with DARE using [https://github.com/martyn/safetensors-merge-supermario](https://github.com/martyn/safetensors-merge-supermario)
## Mergelist
```
cognitivecomputations/dolphin-2.6-mixtral-8x7b
mistralai/Mixtral-8x7B-v0.1
mistralai/Mixtral-8x7B-Instruct-v0.1
```
## Merge command
```
python hf_merge.py mergelist.txt mixtral-merge-1 -p 0.1 -lambda 1.95
```
### Notes
* Seems to generalize across instruct styles
* MoE gates are not modified
|
getdiffus/SDELj-GhostMix
|
getdiffus
| 2023-12-24T06:31:50Z | 0 | 0 | null |
[
"StableDiffusion",
"GetDiffus",
"anime",
"photorealistic",
"female",
"base model",
"art style",
"game character",
"girls",
"portraits",
"realistic",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-23T20:18:56Z |
---
license: creativeml-openrail-m
tags:
- StableDiffusion
- GetDiffus
- anime
- photorealistic
- female
- base model
- art style
- game character
- girls
- portraits
- realistic
---
# SDELj-GhostMix
SDID: **SDELj**
For details on this model and how to use it, or to find more models, visit [https://getdiffus.com/m/SDELj/GhostMix](https://getdiffus.com/m/SDELj/GhostMix)
GetDiffus is a model-sharing site. It lets you upload, search, and discover Stable Diffusion models. It stores models on Hugging Face.
## Links
This Stable Diffusion model was uploaded by [@lightning-joyce](https://huggingface.co/lightning-joyce).
Follow me on X (Twitter): https://x.com/lightning_joyce
Join our Discord: https://discord.gg/NR7bJXKFpX
|
LoneStriker/Sensei-7B-V1-5.0bpw-h6-exl2
|
LoneStriker
| 2023-12-24T06:21:24Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T06:19:21Z |
---
{}
---
# Sensei-7B-V1 Model Card
Sensei-7B-V1 is a Large Language Model (LLM) fine-tuned from OpenPipe's mistral-ft-optimized-1218, which is based on Mistral-7B. Sensei-7B-V1 was fine-tuned with a fully synthetic dataset to specialize in performing retrieval-augmented generation (RAG) over detailed web search results. The model strives to use search, such as [AgentSearch](https://huggingface.co/datasets/SciPhi/AgentSearch-V1), to generate accurate and well-cited summaries from a range of search results, providing more accurate answers to user queries. Please refer to the [docs here](https://agent-search.readthedocs.io/en/latest/) for more information on how to run Sensei end-to-end.
Currently, Sensei is available via a hosted API at https://www.sciphi.ai. You can try a demonstration [here](https://search.sciphi.ai/).
## Model Architecture
Base Model: mistral-ft-optimized-1218
**Architecture Features:**
- Transformer-based model
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Using the Model
It is recommended to use a single search query. The model will return an answer using search results as context.
An example using the AgentSearch package is shown below.
```
export SCIPHI_API_KEY=MY_SCIPHI_API_KEY
# Use `Sensei` for LLM RAG w/ AgentSearch
python -m agent_search.scripts.run_rag run --query="What is Fermat's last theorem?"
```
Alternatively, you may provide your own search context directly to the model by adhering to the following format:
```
### Instruction:
Your task is to perform retrieval augmented generation (RAG) over the given query and search results. Return your answer in a json format that includes a summary of the search results and a list of related queries.
Query:
{prompt}
\n\n
Search Results:
{context}
\n\n
Query:
{prompt}
### Response:
{"summary":
```
__Note__: The inclusion of the text '{"summary":' following the Response footer is intentional. It ensures that the model responds in the proper JSON format; failure to include this leading prefix can cause small deviations. Combining the output with the leading string '{"summary":' results in properly formatted JSON with the keys 'summary' and 'other_queries'.
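A small sketch (an assumption, not from the card) of recombining the completion with the leading prefix and parsing it:
```python
import json

# Hypothetical completion returned after a prompt that ends with '{"summary":'
completion = ' "Fermat\'s Last Theorem states that ...", "other_queries": ["Who proved it?"]}'

parsed = json.loads('{"summary":' + completion)
print(parsed["summary"])
print(parsed["other_queries"])
```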
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## References
1. OpenPipe AI. (2023). Model Card for mistral-ft-optimized-1218. The mistral-ft-1218 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters optimized for downstream fine-tuning on a variety of tasks. For full details, please refer to the release blog post. Model Architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and Byte-fallback BPE tokenizer. [Link](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
|
AhmedSSoliman/mistral-Mistral-text2SQL
|
AhmedSSoliman
| 2023-12-24T06:13:49Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2023-12-24T06:12:57Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral-Mistral-text2SQL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-Mistral-text2SQL
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
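A hedged loading sketch (assumed usage; the card provides no example) for this PEFT adapter on top of the base Mistral model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "AhmedSSoliman/mistral-Mistral-text2SQL")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Hypothetical prompt; the exact prompt template used for fine-tuning is not documented.
prompt = "Translate to SQL: list all customers who placed an order in 2023."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```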
|
LoneStriker/Sensei-7B-V1-3.0bpw-h6-exl2
|
LoneStriker
| 2023-12-24T06:10:11Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T06:08:50Z |
---
{}
---
# Sensei-7B-V1 Model Card
Sensei-7B-V1 is a Large Language Model (LLM) fine-tuned from OpenPipe's mistral-ft-optimized-1218, which is based on Mistral-7B. Sensei-7B-V1 was fine-tuned with a fully synthetic dataset to specialize in performing retrieval-augmented generation (RAG) over detailed web search results. The model strives to use search, such as [AgentSearch](https://huggingface.co/datasets/SciPhi/AgentSearch-V1), to generate accurate and well-cited summaries from a range of search results, providing more accurate answers to user queries. Please refer to the [docs here](https://agent-search.readthedocs.io/en/latest/) for more information on how to run Sensei end-to-end.
Currently, Sensei is available via a hosted API at https://www.sciphi.ai. You can try a demonstration [here](https://search.sciphi.ai/).
## Model Architecture
Base Model: mistral-ft-optimized-1218
**Architecture Features:**
- Transformer-based model
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
## Using the Model
It is recommended to use a single search query. The model will return an answer using search results as context.
An example using the AgentSearch package is shown below.
```
export SCIPHI_API_KEY=MY_SCIPHI_API_KEY
# Use `Sensei` for LLM RAG w/ AgentSearch
python -m agent_search.scripts.run_rag run --query="What is Fermat's last theorem?"
```
Alternatively, you may provide your own search context directly to the model by adhering to the following format:
```
### Instruction:
Your task is to perform retrieval augmented generation (RAG) over the given query and search results. Return your answer in a json format that includes a summary of the search results and a list of related queries.
Query:
{prompt}
\n\n
Search Results:
{context}
\n\n
Query:
{prompt}
### Response:
{"summary":
```
__Note__: The inclusion of the text '{"summary":' following the Response footer is intentional. It ensures that the model responds in the proper JSON format; failure to include this leading prefix can cause small deviations. Combining the output with the leading string '{"summary":' results in properly formatted JSON with the keys 'summary' and 'other_queries'.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## References
1. OpenPipe AI. (2023). Model Card for mistral-ft-optimized-1218. The mistral-ft-1218 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters optimized for downstream fine-tuning on a variety of tasks. For full details, please refer to the release blog post. Model Architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and Byte-fallback BPE tokenizer. [Link](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
|
mitchyAI/sanamchy
|
mitchyAI
| 2023-12-24T06:03:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-24T06:02:03Z |
---
license: creativeml-openrail-m
---
|
chanhua/autotrain-ar615-cxc9m
|
chanhua
| 2023-12-24T06:02:00Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:chanhua/autotrain-data-autotrain-ar615-cxc9m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T06:01:51Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- chanhua/autotrain-data-autotrain-ar615-cxc9m
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.16666666666666666
f1_micro: 0.3333333333333333
f1_weighted: 0.16666666666666666
precision_macro: 0.1111111111111111
precision_micro: 0.3333333333333333
precision_weighted: 0.1111111111111111
recall_macro: 0.3333333333333333
recall_micro: 0.3333333333333333
recall_weighted: 0.3333333333333333
accuracy: 0.3333333333333333
|
chanhua/autotrain-g6laz-7afl8
|
chanhua
| 2023-12-24T05:18:35Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:chanhua/autotrain-data-autotrain-g6laz-7afl8",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T05:18:25Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- chanhua/autotrain-data-autotrain-g6laz-7afl8
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: nan
f1_macro: 0.16666666666666666
f1_micro: 0.3333333333333333
f1_weighted: 0.16666666666666666
precision_macro: 0.1111111111111111
precision_micro: 0.3333333333333333
precision_weighted: 0.1111111111111111
recall_macro: 0.3333333333333333
recall_micro: 0.3333333333333333
recall_weighted: 0.3333333333333333
accuracy: 0.3333333333333333
|
BBGAME605065444/videomae-base-finetuned-camera_move-subset
|
BBGAME605065444
| 2023-12-24T05:01:44Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-12-12T09:00:24Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-camera_move-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-camera_move-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0146
- Accuracy: 0.7285
## Model description
More information needed
## Intended uses & limitations
More information needed
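In the absence of an official example, a minimal inference sketch follows. It assumes 16 sampled frames at 224x224 (the VideoMAE base defaults) and uses random frames as a stand-in for a real clip; if this repository does not ship a preprocessor config, the base model's (`MCG-NJU/videomae-base`) can be used instead.
```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

repo = "BBGAME605065444/videomae-base-finetuned-camera_move-subset"

# 16 frames of shape (channels, height, width); replace with frames sampled from a real video.
video = list(np.random.randn(16, 3, 224, 224))

processor = AutoImageProcessor.from_pretrained(repo)  # or "MCG-NJU/videomae-base"
model = VideoMAEForVideoClassification.from_pretrained(repo)

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```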
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 13000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.4019 | 0.01 | 131 | 1.0423 | 0.5960 |
| 1.0428 | 1.01 | 262 | 0.9115 | 0.6291 |
| 0.8694 | 2.01 | 393 | 0.8362 | 0.6424 |
| 0.6894 | 3.01 | 524 | 0.7107 | 0.7086 |
| 0.6741 | 4.01 | 655 | 0.6441 | 0.7219 |
| 0.5955 | 5.01 | 786 | 0.8368 | 0.6821 |
| 0.5898 | 6.01 | 917 | 0.7774 | 0.7152 |
| 0.5252 | 7.01 | 1048 | 0.6022 | 0.7682 |
| 0.453 | 8.01 | 1179 | 0.6388 | 0.7881 |
| 0.4625 | 9.01 | 1310 | 0.8417 | 0.7152 |
| 0.465 | 10.01 | 1441 | 0.8575 | 0.6954 |
| 0.5106 | 11.01 | 1572 | 0.7849 | 0.7616 |
| 0.4138 | 12.01 | 1703 | 0.9293 | 0.7086 |
| 0.3526 | 13.01 | 1834 | 1.2684 | 0.6225 |
| 0.3645 | 14.01 | 1965 | 0.9428 | 0.7285 |
| 0.2778 | 15.01 | 2096 | 1.0217 | 0.6821 |
| 0.2821 | 16.01 | 2227 | 1.3365 | 0.6623 |
| 0.2654 | 17.01 | 2358 | 1.0170 | 0.7550 |
| 0.2961 | 18.01 | 2489 | 1.3952 | 0.7152 |
| 0.1987 | 19.01 | 2620 | 1.4083 | 0.6954 |
| 0.1832 | 20.01 | 2751 | 1.5808 | 0.6689 |
| 0.2035 | 21.01 | 2882 | 1.2864 | 0.7483 |
| 0.1608 | 22.01 | 3013 | 1.7499 | 0.6755 |
| 0.2171 | 23.01 | 3144 | 1.2574 | 0.7417 |
| 0.1824 | 24.01 | 3275 | 1.4203 | 0.7483 |
| 0.152 | 25.01 | 3406 | 1.4163 | 0.7351 |
| 0.1512 | 26.01 | 3537 | 1.5550 | 0.7086 |
| 0.1635 | 27.01 | 3668 | 1.2334 | 0.7682 |
| 0.1371 | 28.01 | 3799 | 1.5798 | 0.7417 |
| 0.1048 | 29.01 | 3930 | 1.6993 | 0.7219 |
| 0.097 | 30.01 | 4061 | 1.8505 | 0.6887 |
| 0.1662 | 31.01 | 4192 | 1.5091 | 0.7417 |
| 0.0957 | 32.01 | 4323 | 1.5388 | 0.7285 |
| 0.0967 | 33.01 | 4454 | 1.6766 | 0.7219 |
| 0.1581 | 34.01 | 4585 | 1.5444 | 0.7550 |
| 0.0967 | 35.01 | 4716 | 1.6108 | 0.7219 |
| 0.1013 | 36.01 | 4847 | 1.6734 | 0.7417 |
| 0.0691 | 37.01 | 4978 | 1.5116 | 0.7616 |
| 0.1161 | 38.01 | 5109 | 1.6343 | 0.7285 |
| 0.0847 | 39.01 | 5240 | 1.6119 | 0.7616 |
| 0.0769 | 40.01 | 5371 | 1.8494 | 0.7285 |
| 0.0604 | 41.01 | 5502 | 1.9854 | 0.7020 |
| 0.0845 | 42.01 | 5633 | 1.9722 | 0.7020 |
| 0.0808 | 43.01 | 5764 | 1.9046 | 0.7219 |
| 0.091 | 44.01 | 5895 | 1.8344 | 0.7152 |
| 0.1249 | 45.01 | 6026 | 1.7306 | 0.7219 |
| 0.0602 | 46.01 | 6157 | 1.8982 | 0.7550 |
| 0.0901 | 47.01 | 6288 | 1.9487 | 0.6954 |
| 0.081 | 48.01 | 6419 | 1.6702 | 0.7417 |
| 0.0336 | 49.01 | 6550 | 1.5591 | 0.7682 |
| 0.0975 | 50.01 | 6681 | 1.9009 | 0.7086 |
| 0.0697 | 51.01 | 6812 | 1.6135 | 0.7616 |
| 0.0474 | 52.01 | 6943 | 1.5886 | 0.7682 |
| 0.0427 | 53.01 | 7074 | 1.6915 | 0.7616 |
| 0.0774 | 54.01 | 7205 | 1.7434 | 0.7417 |
| 0.0685 | 55.01 | 7336 | 1.9552 | 0.7152 |
| 0.0531 | 56.01 | 7467 | 1.7448 | 0.7550 |
| 0.0371 | 57.01 | 7598 | 1.8175 | 0.7483 |
| 0.0244 | 58.01 | 7729 | 1.9252 | 0.7351 |
| 0.0317 | 59.01 | 7860 | 2.1868 | 0.6954 |
| 0.0813 | 60.01 | 7991 | 1.8846 | 0.7417 |
| 0.0857 | 61.01 | 8122 | 1.7532 | 0.7285 |
| 0.0424 | 62.01 | 8253 | 1.8003 | 0.7550 |
| 0.0481 | 63.01 | 8384 | 1.7935 | 0.7351 |
| 0.0495 | 64.01 | 8515 | 2.0369 | 0.7219 |
| 0.0217 | 65.01 | 8646 | 2.0552 | 0.7219 |
| 0.0303 | 66.01 | 8777 | 1.8629 | 0.7417 |
| 0.0439 | 67.01 | 8908 | 1.9953 | 0.7417 |
| 0.0518 | 68.01 | 9039 | 1.9958 | 0.7351 |
| 0.0295 | 69.01 | 9170 | 1.8984 | 0.7616 |
| 0.0485 | 70.01 | 9301 | 1.7583 | 0.7483 |
| 0.0149 | 71.01 | 9432 | 1.8815 | 0.7351 |
| 0.0107 | 72.01 | 9563 | 1.8341 | 0.7616 |
| 0.0475 | 73.01 | 9694 | 1.7961 | 0.7351 |
| 0.0318 | 74.01 | 9825 | 2.0999 | 0.7285 |
| 0.0108 | 75.01 | 9956 | 2.0441 | 0.7219 |
| 0.0085 | 76.01 | 10087 | 1.9867 | 0.7351 |
| 0.0278 | 77.01 | 10218 | 1.7968 | 0.7682 |
| 0.0106 | 78.01 | 10349 | 1.8662 | 0.7682 |
| 0.0241 | 79.01 | 10480 | 1.8388 | 0.7682 |
| 0.0303 | 80.01 | 10611 | 1.8758 | 0.7682 |
| 0.0199 | 81.01 | 10742 | 1.9275 | 0.7483 |
| 0.0052 | 82.01 | 10873 | 1.8741 | 0.7417 |
| 0.007 | 83.01 | 11004 | 2.0243 | 0.7219 |
| 0.0101 | 84.01 | 11135 | 1.9947 | 0.7285 |
| 0.0184 | 85.01 | 11266 | 1.9342 | 0.7417 |
| 0.0062 | 86.01 | 11397 | 2.0611 | 0.7351 |
| 0.018 | 87.01 | 11528 | 1.9867 | 0.7550 |
| 0.0194 | 88.01 | 11659 | 1.8631 | 0.7483 |
| 0.0396 | 89.01 | 11790 | 1.9126 | 0.7417 |
| 0.0048 | 90.01 | 11921 | 1.8788 | 0.7351 |
| 0.0044 | 91.01 | 12052 | 1.8962 | 0.7351 |
| 0.0033 | 92.01 | 12183 | 1.9455 | 0.7351 |
| 0.0052 | 93.01 | 12314 | 1.9772 | 0.7351 |
| 0.0068 | 94.01 | 12445 | 2.0047 | 0.7351 |
| 0.0046 | 95.01 | 12576 | 2.0135 | 0.7351 |
| 0.0158 | 96.01 | 12707 | 2.0176 | 0.7351 |
| 0.0074 | 97.01 | 12838 | 2.0148 | 0.7285 |
| 0.0016 | 98.01 | 12969 | 2.0152 | 0.7285 |
| 0.0051 | 99.0 | 13000 | 2.0146 | 0.7285 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
imfarzanansari/skintelligent-acne
|
imfarzanansari
| 2023-12-24T04:46:50Z | 96,167 | 9 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T04:24:55Z |
---
license: mit
language:
- en
metrics:
- precision
library_name: transformers
pipeline_tag: image-classification
---
# Acne Severity Detection Model
## Overview
This model card provides documentation for the Acne Severity Detection model checkpoint used in the Hugging Face pipeline. The model is designed to assess acne severity levels, ranging from clear skin to very severe acne.
## Model Details
The checkpoint includes the following files:
- **`config.json`**: Model configuration settings.
- **`model.safetensors`**: Serialized model parameters and architecture.
- **`optimizer.pt`**: Optimizer state capturing the current model optimization.
- **`preprocessor_config.json`**: Configuration file for the preprocessor.
- **`rng_state.pth`**: Random number generator state for reproducibility.
- **`scheduler.pt`**: Scheduler state for controlling learning rate schedules.
- **`trainer_state.json`**: Trainer state with information about the training process.
- **`training_args.bin`**: Binary file storing training arguments.
## Usage
To utilize the model checkpoint, follow these steps:
1. Download this repository.
2. Ensure the required dependencies are installed (`pip install -r requirements.txt`).
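Once the dependencies are installed, the checkpoint can be queried with the standard `transformers` image-classification pipeline. A minimal sketch (the image path below is a placeholder, not part of this repository):
```python
from transformers import pipeline

# Load this repository's checkpoint with the generic image-classification pipeline.
classifier = pipeline("image-classification", model="imfarzanansari/skintelligent-acne")

# Score a local photo; replace the placeholder path with a real image file or URL.
for prediction in classifier("face_photo.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```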
## Severity Levels
- **Level -1**: Clear Skin
- **Level 0**: Occasional Spots
- **Level 1**: Mild Acne
- **Level 2**: Moderate Acne
- **Level 3**: Severe Acne
- **Level 4**: Very Severe Acne
## Important Notes
- The model card provides insight into the model's purpose, capabilities, and usage instructions.
- Ensure all necessary files are present in the `checkpoint` directory for proper functionality.
## License
This Acne Severity Detection model checkpoint is licensed under the [MIT License](LICENSE). Please review and adhere to the license when using or modifying the code.
|
ddh0/phi-2-GGUF-fp16
|
ddh0
| 2023-12-24T04:45:35Z | 8 | 2 | null |
[
"gguf",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T04:13:14Z |
---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-2/blob/main/LICENSE
pipeline_tag: text-generation
---
This is Microsoft's [Phi-2](https://huggingface.co/microsoft/phi-2), converted to GGUF without quantization. No other changes were made.
The model was converted using `convert-hf-to-gguf.py` from Georgi Gerganov's llama.cpp repo, release `b1671`.
All credit belongs to [Microsoft](https://huggingface.co/microsoft) for training and releasing this model. Thank you!
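As a rough sketch (not part of the original release), the file can be loaded with the `llama-cpp-python` bindings. The GGUF file name below is a placeholder, the "Instruct:/Output:" prompt format follows Microsoft's Phi-2 model card, and a llama.cpp/binding version with Phi-2 support is assumed.
```python
from llama_cpp import Llama

# Path is a placeholder; point it at the fp16 GGUF downloaded from this repository.
llm = Llama(model_path="./phi-2.fp16.gguf", n_ctx=2048)

output = llm(
    "Instruct: Explain what a GGUF file is.\nOutput:",
    max_tokens=128,
    stop=["Instruct:"],
)
print(output["choices"][0]["text"])
```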
|
ntc-ai/SDXL-LoRA-slider.gold-eye-color
|
ntc-ai
| 2023-12-24T04:44:38Z | 147 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-24T04:44:34Z |
---
language:
- en
thumbnail: "images/evaluate/gold eye color...green eye color/gold eye color_17_3.0.png"
widget:
- text: gold eye color
output:
url: images/gold eye color_17_3.0.png
- text: gold eye color
output:
url: images/gold eye color_19_3.0.png
- text: gold eye color
output:
url: images/gold eye color_20_3.0.png
- text: gold eye color
output:
url: images/gold eye color_21_3.0.png
- text: gold eye color
output:
url: images/gold eye color_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "gold eye color"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - gold eye color (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/gold eye color_17_-3.0.png" width=256 height=256 /> | <img src="images/gold eye color_17_0.0.png" width=256 height=256 /> | <img src="images/gold eye color_17_3.0.png" width=256 height=256 /> |
| <img src="images/gold eye color_19_-3.0.png" width=256 height=256 /> | <img src="images/gold eye color_19_0.0.png" width=256 height=256 /> | <img src="images/gold eye color_19_3.0.png" width=256 height=256 /> |
| <img src="images/gold eye color_20_-3.0.png" width=256 height=256 /> | <img src="images/gold eye color_20_0.0.png" width=256 height=256 /> | <img src="images/gold eye color_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
gold eye color
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.gold-eye-color', weight_name='gold eye color.safetensors', adapter_name="gold eye color")
# Activate the LoRA
pipe.set_adapters(["gold eye color"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, gold eye color"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 580+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
fasterinnerlooper/codeBERTa-csharp
|
fasterinnerlooper
| 2023-12-24T04:28:34Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"en",
"dataset:microsoft/LCC_csharp",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/CodeBERTa-small-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-11-23T03:49:35Z |
---
base_model: huggingface/CodeBERTa-small-v1
tags:
- generated_from_trainer
model-index:
- name: models
results: []
license: mit
datasets:
- microsoft/LCC_csharp
language:
- en
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# models
This model is a fine-tuned version of [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) on the [microsoft/LCC_csharp](https://huggingface.co/datasets/microsoft/LCC_csharp) dataset.
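As a quick illustration (not from the original card), the checkpoint can be queried with the standard fill-mask pipeline; CodeBERTa uses the RoBERTa-style `<mask>` token, and the C# snippet below is only an example input:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="fasterinnerlooper/codeBERTa-csharp")

# Ask the model to fill in the masked return type of a C# method signature.
for prediction in fill_mask("public static <mask> Main(string[] args) { }"):
    print(prediction["token_str"], round(prediction["score"], 3))
```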
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hkivancoral/hushem_40x_deit_base_adamax_00001_fold5
|
hkivancoral
| 2023-12-24T03:56:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T03:17:28Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_adamax_00001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8780487804878049
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_adamax_00001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3162
- Accuracy: 0.8780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2212 | 1.0 | 220 | 0.5037 | 0.7805 |
| 0.0167 | 2.0 | 440 | 0.4288 | 0.8049 |
| 0.0048 | 3.0 | 660 | 0.5660 | 0.8293 |
| 0.001 | 4.0 | 880 | 0.5808 | 0.8049 |
| 0.0006 | 5.0 | 1100 | 0.5916 | 0.8049 |
| 0.0005 | 6.0 | 1320 | 0.6221 | 0.8293 |
| 0.0003 | 7.0 | 1540 | 0.6354 | 0.8293 |
| 0.0002 | 8.0 | 1760 | 0.6592 | 0.8293 |
| 0.0002 | 9.0 | 1980 | 0.6836 | 0.8293 |
| 0.0001 | 10.0 | 2200 | 0.7195 | 0.8537 |
| 0.0001 | 11.0 | 2420 | 0.7292 | 0.8293 |
| 0.0001 | 12.0 | 2640 | 0.7556 | 0.8537 |
| 0.0001 | 13.0 | 2860 | 0.7481 | 0.8537 |
| 0.0001 | 14.0 | 3080 | 0.7541 | 0.8537 |
| 0.0 | 15.0 | 3300 | 0.7642 | 0.8537 |
| 0.0 | 16.0 | 3520 | 0.7944 | 0.8537 |
| 0.0 | 17.0 | 3740 | 0.8081 | 0.8537 |
| 0.0 | 18.0 | 3960 | 0.8431 | 0.8537 |
| 0.0 | 19.0 | 4180 | 0.8377 | 0.8537 |
| 0.0 | 20.0 | 4400 | 0.8619 | 0.8537 |
| 0.0 | 21.0 | 4620 | 0.8688 | 0.8537 |
| 0.0 | 22.0 | 4840 | 0.9067 | 0.8537 |
| 0.0 | 23.0 | 5060 | 0.9298 | 0.8537 |
| 0.0 | 24.0 | 5280 | 0.9319 | 0.8537 |
| 0.0 | 25.0 | 5500 | 0.9416 | 0.8537 |
| 0.0 | 26.0 | 5720 | 0.9575 | 0.8537 |
| 0.0 | 27.0 | 5940 | 0.9826 | 0.8537 |
| 0.0 | 28.0 | 6160 | 0.9800 | 0.8537 |
| 0.0 | 29.0 | 6380 | 0.9999 | 0.8537 |
| 0.0 | 30.0 | 6600 | 1.0189 | 0.8537 |
| 0.0 | 31.0 | 6820 | 1.0648 | 0.8537 |
| 0.0 | 32.0 | 7040 | 1.0627 | 0.8537 |
| 0.0 | 33.0 | 7260 | 1.0899 | 0.8780 |
| 0.0 | 34.0 | 7480 | 1.1141 | 0.8780 |
| 0.0 | 35.0 | 7700 | 1.1351 | 0.8537 |
| 0.0 | 36.0 | 7920 | 1.1265 | 0.8780 |
| 0.0 | 37.0 | 8140 | 1.1654 | 0.8780 |
| 0.0 | 38.0 | 8360 | 1.1754 | 0.8780 |
| 0.0 | 39.0 | 8580 | 1.1881 | 0.8780 |
| 0.0 | 40.0 | 8800 | 1.1930 | 0.8780 |
| 0.0 | 41.0 | 9020 | 1.2376 | 0.8780 |
| 0.0 | 42.0 | 9240 | 1.2450 | 0.8780 |
| 0.0 | 43.0 | 9460 | 1.2371 | 0.8780 |
| 0.0 | 44.0 | 9680 | 1.2839 | 0.8780 |
| 0.0 | 45.0 | 9900 | 1.2844 | 0.8780 |
| 0.0 | 46.0 | 10120 | 1.2849 | 0.8780 |
| 0.0 | 47.0 | 10340 | 1.3098 | 0.8780 |
| 0.0 | 48.0 | 10560 | 1.3232 | 0.8780 |
| 0.0 | 49.0 | 10780 | 1.3105 | 0.8780 |
| 0.0 | 50.0 | 11000 | 1.3162 | 0.8780 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
archisin/ppo-Huggy
|
archisin
| 2023-12-24T03:50:14Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-12-24T03:50:02Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: archisin/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hkivancoral/hushem_40x_deit_base_sgd_001_fold5
|
hkivancoral
| 2023-12-24T03:48:01Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T03:04:20Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8048780487804879
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_001_fold5
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5235
- Accuracy: 0.8049
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2691 | 1.0 | 220 | 1.3176 | 0.3415 |
| 1.1324 | 2.0 | 440 | 1.2295 | 0.4634 |
| 1.003 | 3.0 | 660 | 1.1173 | 0.6341 |
| 0.8718 | 4.0 | 880 | 0.9888 | 0.6585 |
| 0.7662 | 5.0 | 1100 | 0.8700 | 0.6829 |
| 0.6305 | 6.0 | 1320 | 0.7780 | 0.6585 |
| 0.552 | 7.0 | 1540 | 0.7068 | 0.6829 |
| 0.4791 | 8.0 | 1760 | 0.6670 | 0.6829 |
| 0.413 | 9.0 | 1980 | 0.6302 | 0.6829 |
| 0.3827 | 10.0 | 2200 | 0.6050 | 0.7073 |
| 0.3215 | 11.0 | 2420 | 0.5880 | 0.7073 |
| 0.2953 | 12.0 | 2640 | 0.5689 | 0.7073 |
| 0.2691 | 13.0 | 2860 | 0.5551 | 0.7073 |
| 0.255 | 14.0 | 3080 | 0.5391 | 0.7317 |
| 0.2205 | 15.0 | 3300 | 0.5338 | 0.7561 |
| 0.2031 | 16.0 | 3520 | 0.5276 | 0.8049 |
| 0.1827 | 17.0 | 3740 | 0.5158 | 0.8049 |
| 0.178 | 18.0 | 3960 | 0.5117 | 0.8049 |
| 0.1722 | 19.0 | 4180 | 0.5070 | 0.8293 |
| 0.1354 | 20.0 | 4400 | 0.5054 | 0.8293 |
| 0.1154 | 21.0 | 4620 | 0.5008 | 0.8293 |
| 0.1032 | 22.0 | 4840 | 0.5031 | 0.8293 |
| 0.123 | 23.0 | 5060 | 0.5052 | 0.8293 |
| 0.0925 | 24.0 | 5280 | 0.5012 | 0.8049 |
| 0.1004 | 25.0 | 5500 | 0.5002 | 0.8293 |
| 0.1106 | 26.0 | 5720 | 0.5000 | 0.8293 |
| 0.0932 | 27.0 | 5940 | 0.5018 | 0.8293 |
| 0.0974 | 28.0 | 6160 | 0.5069 | 0.8293 |
| 0.0749 | 29.0 | 6380 | 0.5067 | 0.8293 |
| 0.0626 | 30.0 | 6600 | 0.5071 | 0.8293 |
| 0.058 | 31.0 | 6820 | 0.5023 | 0.8293 |
| 0.0771 | 32.0 | 7040 | 0.5068 | 0.8293 |
| 0.0537 | 33.0 | 7260 | 0.5089 | 0.8049 |
| 0.0443 | 34.0 | 7480 | 0.5110 | 0.8049 |
| 0.0529 | 35.0 | 7700 | 0.5102 | 0.8049 |
| 0.056 | 36.0 | 7920 | 0.5123 | 0.8293 |
| 0.0373 | 37.0 | 8140 | 0.5147 | 0.8293 |
| 0.0662 | 38.0 | 8360 | 0.5122 | 0.8293 |
| 0.0489 | 39.0 | 8580 | 0.5155 | 0.8293 |
| 0.0389 | 40.0 | 8800 | 0.5166 | 0.8293 |
| 0.0414 | 41.0 | 9020 | 0.5205 | 0.8049 |
| 0.0455 | 42.0 | 9240 | 0.5225 | 0.8293 |
| 0.0397 | 43.0 | 9460 | 0.5226 | 0.8049 |
| 0.0345 | 44.0 | 9680 | 0.5228 | 0.8049 |
| 0.0281 | 45.0 | 9900 | 0.5217 | 0.8049 |
| 0.0392 | 46.0 | 10120 | 0.5231 | 0.8049 |
| 0.0436 | 47.0 | 10340 | 0.5235 | 0.8293 |
| 0.0347 | 48.0 | 10560 | 0.5238 | 0.8049 |
| 0.0331 | 49.0 | 10780 | 0.5237 | 0.8049 |
| 0.0457 | 50.0 | 11000 | 0.5235 | 0.8049 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_base_adamax_00001_fold4
|
hkivancoral
| 2023-12-24T03:17:18Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T02:30:39Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_adamax_00001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9523809523809523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_adamax_00001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2776
- Accuracy: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2891 | 1.0 | 219 | 0.3655 | 0.9048 |
| 0.0271 | 2.0 | 438 | 0.1551 | 0.9762 |
| 0.0059 | 3.0 | 657 | 0.1424 | 0.9762 |
| 0.0011 | 4.0 | 876 | 0.1398 | 0.9762 |
| 0.0007 | 5.0 | 1095 | 0.1496 | 0.9762 |
| 0.0005 | 6.0 | 1314 | 0.1466 | 0.9762 |
| 0.0003 | 7.0 | 1533 | 0.1409 | 0.9762 |
| 0.0002 | 8.0 | 1752 | 0.1498 | 0.9762 |
| 0.0002 | 9.0 | 1971 | 0.1564 | 0.9762 |
| 0.0001 | 10.0 | 2190 | 0.1656 | 0.9524 |
| 0.0001 | 11.0 | 2409 | 0.1807 | 0.9524 |
| 0.0001 | 12.0 | 2628 | 0.1735 | 0.9762 |
| 0.0001 | 13.0 | 2847 | 0.1728 | 0.9762 |
| 0.0001 | 14.0 | 3066 | 0.1752 | 0.9762 |
| 0.0 | 15.0 | 3285 | 0.1830 | 0.9524 |
| 0.0 | 16.0 | 3504 | 0.1909 | 0.9762 |
| 0.0 | 17.0 | 3723 | 0.1856 | 0.9762 |
| 0.0 | 18.0 | 3942 | 0.1931 | 0.9762 |
| 0.0 | 19.0 | 4161 | 0.1937 | 0.9762 |
| 0.0 | 20.0 | 4380 | 0.2012 | 0.9762 |
| 0.0 | 21.0 | 4599 | 0.1972 | 0.9762 |
| 0.0 | 22.0 | 4818 | 0.2059 | 0.9762 |
| 0.0 | 23.0 | 5037 | 0.2072 | 0.9762 |
| 0.0 | 24.0 | 5256 | 0.2139 | 0.9762 |
| 0.0 | 25.0 | 5475 | 0.2220 | 0.9524 |
| 0.0 | 26.0 | 5694 | 0.2242 | 0.9762 |
| 0.0 | 27.0 | 5913 | 0.2291 | 0.9524 |
| 0.0 | 28.0 | 6132 | 0.2302 | 0.9524 |
| 0.0 | 29.0 | 6351 | 0.2283 | 0.9524 |
| 0.0 | 30.0 | 6570 | 0.2384 | 0.9524 |
| 0.0 | 31.0 | 6789 | 0.2437 | 0.9524 |
| 0.0 | 32.0 | 7008 | 0.2389 | 0.9762 |
| 0.0 | 33.0 | 7227 | 0.2474 | 0.9524 |
| 0.0 | 34.0 | 7446 | 0.2474 | 0.9524 |
| 0.0 | 35.0 | 7665 | 0.2453 | 0.9524 |
| 0.0 | 36.0 | 7884 | 0.2498 | 0.9524 |
| 0.0 | 37.0 | 8103 | 0.2535 | 0.9524 |
| 0.0 | 38.0 | 8322 | 0.2499 | 0.9762 |
| 0.0 | 39.0 | 8541 | 0.2607 | 0.9524 |
| 0.0 | 40.0 | 8760 | 0.2656 | 0.9524 |
| 0.0 | 41.0 | 8979 | 0.2652 | 0.9524 |
| 0.0 | 42.0 | 9198 | 0.2609 | 0.9524 |
| 0.0 | 43.0 | 9417 | 0.2697 | 0.9524 |
| 0.0 | 44.0 | 9636 | 0.2693 | 0.9524 |
| 0.0 | 45.0 | 9855 | 0.2763 | 0.9524 |
| 0.0 | 46.0 | 10074 | 0.2779 | 0.9524 |
| 0.0 | 47.0 | 10293 | 0.2750 | 0.9524 |
| 0.0 | 48.0 | 10512 | 0.2730 | 0.9524 |
| 0.0 | 49.0 | 10731 | 0.2766 | 0.9524 |
| 0.0 | 50.0 | 10950 | 0.2776 | 0.9524 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
bandhit/typhoon-7b-q4-bnb_cuda-ts-1703352224
|
bandhit
| 2023-12-24T03:15:24Z | 23 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"th",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2023-12-23T17:25:52Z |
---
license: apache-2.0
language:
- th
library_name: transformers
pipeline_tag: text-generation
---
# Summary
A 4-bit quantization of [scb10x/typhoon-7b](https://huggingface.co/scb10x/typhoon-7b), requiring less than 8 GB of VRAM.
# Steps to reproduce
```python
# init parameters
model_name: str = 'scb10x/typhoon-7b'
quantization_mode: str = 'q4-bnb_cuda' # possible values = {'q4-bnb_cuda', 'q8-bnb_cuda', 'q4-torch_ptdq', 'q8-torch_ptdq'}
# load tokenizer
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token_id = tokenizer.eos_token_id
print(tokenizer) # LlamaTokenizerFast
# load model
import torch
from transformers import AutoModelForCausalLM
if quantization_mode == 'q4-bnb_cuda': # ampere architecture with 8gb vram + cpu with 20gb is recommended
print('4-bits bitsandbytes quantization with cuda')
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_4bit = True,
device_map = 'auto',
torch_dtype = torch.bfloat16)
elif quantization_mode == 'q8-bnb_cuda': # ampere architecture with 12gb vram + cpu with 20gb is recommended
print('8-bits bitsandbytes quantization with cuda')
model = AutoModelForCausalLM.from_pretrained(
model_name,
load_in_8bit = True,
device_map = 'auto',
torch_dtype = torch.bfloat16)
elif quantization_mode == 'q4-torch_ptdq': # cpu with 64gb++ ram is recommended
print('4-bits x2 post training dynamic quantization')
base_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype = torch.float32)
model = torch.quantization.quantize_dynamic(base_model, dtype = torch.quint4x2)
elif quantization_mode == 'q8-torch_ptdq': # cpu with 64gb++ ram is recommended
print('8-bits post training dynamic quantization')
base_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype = torch.float32)
model = torch.quantization.quantize_dynamic(base_model, dtype = torch.quint8)
else:
print('default model')
model = AutoModelForCausalLM.from_pretrained(model_name)
print(model) # MistralForCausalLM
# text generator
from transformers import GenerationConfig, TextGenerationPipeline
config = GenerationConfig.from_pretrained(model_name)
config.num_return_sequences: int = 1
config.do_sample: bool = True
config.max_new_tokens: int = 128
config.temperature: float = 0.7
config.top_p: float = 0.95
config.repetition_penalty: float = 1.3
generator = TextGenerationPipeline(
model = model,
tokenizer = tokenizer,
return_full_text = True,
generation_config = config)
# sample
sample: str = 'ความหมายของชีวิตคืออะไร?\n'
output = generator(sample, pad_token_id = tokenizer.eos_token_id)
print(output[0]['generated_text'])
```
# `requirement.txt`
```txt
torch==2.1.2
accelerate==0.25.0
bitsandbytes==0.41.3
#transformers==4.37.0.dev0
transformers @ git+https://github.com/huggingface/transformers
```
|
hkivancoral/hushem_40x_deit_base_sgd_001_fold4
|
hkivancoral
| 2023-12-24T03:04:11Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T02:21:00Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_001_fold4
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9285714285714286
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_001_fold4
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2742 | 1.0 | 219 | 1.3421 | 0.4286 |
| 1.1364 | 2.0 | 438 | 1.2693 | 0.4048 |
| 0.9912 | 3.0 | 657 | 1.1701 | 0.5238 |
| 0.8292 | 4.0 | 876 | 1.0493 | 0.6429 |
| 0.7771 | 5.0 | 1095 | 0.9148 | 0.7143 |
| 0.6956 | 6.0 | 1314 | 0.8048 | 0.7381 |
| 0.519 | 7.0 | 1533 | 0.7062 | 0.8095 |
| 0.5042 | 8.0 | 1752 | 0.6401 | 0.7857 |
| 0.4397 | 9.0 | 1971 | 0.5785 | 0.8333 |
| 0.3933 | 10.0 | 2190 | 0.5338 | 0.8571 |
| 0.341 | 11.0 | 2409 | 0.4959 | 0.8810 |
| 0.3345 | 12.0 | 2628 | 0.4569 | 0.8810 |
| 0.2949 | 13.0 | 2847 | 0.4265 | 0.9048 |
| 0.2608 | 14.0 | 3066 | 0.3999 | 0.9286 |
| 0.2368 | 15.0 | 3285 | 0.3796 | 0.9286 |
| 0.2257 | 16.0 | 3504 | 0.3614 | 0.9286 |
| 0.232 | 17.0 | 3723 | 0.3430 | 0.9286 |
| 0.1928 | 18.0 | 3942 | 0.3249 | 0.9286 |
| 0.1804 | 19.0 | 4161 | 0.3144 | 0.9286 |
| 0.1542 | 20.0 | 4380 | 0.3019 | 0.9048 |
| 0.1333 | 21.0 | 4599 | 0.2915 | 0.9286 |
| 0.1333 | 22.0 | 4818 | 0.2894 | 0.9048 |
| 0.1178 | 23.0 | 5037 | 0.2746 | 0.9286 |
| 0.1098 | 24.0 | 5256 | 0.2771 | 0.9048 |
| 0.1099 | 25.0 | 5475 | 0.2649 | 0.9048 |
| 0.0836 | 26.0 | 5694 | 0.2732 | 0.9048 |
| 0.0751 | 27.0 | 5913 | 0.2625 | 0.9048 |
| 0.0745 | 28.0 | 6132 | 0.2608 | 0.9048 |
| 0.0826 | 29.0 | 6351 | 0.2526 | 0.9048 |
| 0.079 | 30.0 | 6570 | 0.2463 | 0.9286 |
| 0.0659 | 31.0 | 6789 | 0.2439 | 0.9048 |
| 0.0738 | 32.0 | 7008 | 0.2422 | 0.9286 |
| 0.0683 | 33.0 | 7227 | 0.2335 | 0.9286 |
| 0.0674 | 34.0 | 7446 | 0.2343 | 0.9048 |
| 0.0633 | 35.0 | 7665 | 0.2311 | 0.9048 |
| 0.0608 | 36.0 | 7884 | 0.2259 | 0.9286 |
| 0.0543 | 37.0 | 8103 | 0.2239 | 0.9286 |
| 0.0444 | 38.0 | 8322 | 0.2256 | 0.9286 |
| 0.0496 | 39.0 | 8541 | 0.2255 | 0.9286 |
| 0.0513 | 40.0 | 8760 | 0.2253 | 0.9286 |
| 0.0449 | 41.0 | 8979 | 0.2226 | 0.9286 |
| 0.0449 | 42.0 | 9198 | 0.2216 | 0.9286 |
| 0.0549 | 43.0 | 9417 | 0.2202 | 0.9286 |
| 0.0488 | 44.0 | 9636 | 0.2213 | 0.9286 |
| 0.0437 | 45.0 | 9855 | 0.2208 | 0.9286 |
| 0.0362 | 46.0 | 10074 | 0.2201 | 0.9286 |
| 0.0622 | 47.0 | 10293 | 0.2188 | 0.9286 |
| 0.0546 | 48.0 | 10512 | 0.2185 | 0.9286 |
| 0.0472 | 49.0 | 10731 | 0.2186 | 0.9286 |
| 0.0581 | 50.0 | 10950 | 0.2186 | 0.9286 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
geektech/t5-v1_1-xl-lora
|
geektech
| 2023-12-24T02:57:39Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:google/t5-v1_1-xl",
"base_model:adapter:google/t5-v1_1-xl",
"region:us"
] | null | 2023-12-23T17:27:05Z |
---
library_name: peft
base_model: google/t5-v1_1-xl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
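A minimal loading sketch, assuming this repository contains a LoRA adapter for the `google/t5-v1_1-xl` base model declared in the metadata; the prompt is purely illustrative, since the adapter's training task is not documented here:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-xl")
model = PeftModel.from_pretrained(base, "geektech/t5-v1_1-xl-lora")
tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xl")

# Illustrative prompt only; the adapter's intended task is undocumented.
inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```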
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
fliarbi/t5-flan-base-name-detailes-about-history-mission
|
fliarbi
| 2023-12-24T02:51:39Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-23T12:41:21Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: t5-flan-base-name-detailes-about-history-mission
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-flan-base-name-detailes-about-history-mission
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.0
|
helpmefindaname/flair-eml-sapbert-ncbi-disease
|
helpmefindaname
| 2023-12-24T02:38:37Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | null | 2023-12-24T02:23:44Z |
---
tags:
- flair
- entity-mention-linker
---
## sapbert-ncbi-disease
Biomedical Entity Mention Linking for diseases
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
sentence = Sentence("Behavioral abnormalities in the Fmr1 KO2 Mouse Model of Fragile X Syndrome")
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("helpmefindaname/flair-eml-sapbert-ncbi-disease")
dictionary = linker.dictionary
# find the candidates for the mentions
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(linker.entity_label_type):
print(f"Span: {span.text}")
for candidate_label in span.get_labels(linker.label_type):
candidate = dictionary[candidate_label.value]
print(f"Candidate: {candidate.concept_name}")
```
As an alternative to downloading the precomputed model (which requires considerable storage), you can also build the model
and compute the embeddings for the dataset yourself using:
```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-sapbert-ncbi-disease", "diseases", dictionary_name_or_path="ctd-diseases", hybrid_search=False, entity_type="diseases-eml")
```
This will reduce the download requirements, at the cost of computation.
This EntityMentionLinker uses [dmis-lab/biosyn-sapbert-ncbi-disease](https://huggingface.co/dmis-lab/biosyn-sapbert-ncbi-disease) as embeddings for linking mentions to candidates.
|
hkivancoral/hushem_40x_deit_base_adamax_00001_fold3
|
hkivancoral
| 2023-12-24T02:30:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T01:44:29Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_adamax_00001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9302325581395349
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_adamax_00001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7075
- Accuracy: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2263 | 1.0 | 217 | 0.4817 | 0.7907 |
| 0.0168 | 2.0 | 434 | 0.4089 | 0.8605 |
| 0.0026 | 3.0 | 651 | 0.3730 | 0.9070 |
| 0.0013 | 4.0 | 868 | 0.4093 | 0.9070 |
| 0.0007 | 5.0 | 1085 | 0.4236 | 0.9070 |
| 0.0005 | 6.0 | 1302 | 0.4344 | 0.9070 |
| 0.0004 | 7.0 | 1519 | 0.4366 | 0.9070 |
| 0.0003 | 8.0 | 1736 | 0.4561 | 0.9070 |
| 0.0002 | 9.0 | 1953 | 0.4646 | 0.9070 |
| 0.0001 | 10.0 | 2170 | 0.4712 | 0.9070 |
| 0.0001 | 11.0 | 2387 | 0.4696 | 0.9070 |
| 0.0001 | 12.0 | 2604 | 0.4779 | 0.9070 |
| 0.0001 | 13.0 | 2821 | 0.4883 | 0.9070 |
| 0.0001 | 14.0 | 3038 | 0.4911 | 0.9070 |
| 0.0 | 15.0 | 3255 | 0.4887 | 0.9070 |
| 0.0 | 16.0 | 3472 | 0.5049 | 0.9070 |
| 0.0 | 17.0 | 3689 | 0.5115 | 0.9070 |
| 0.0 | 18.0 | 3906 | 0.5246 | 0.9070 |
| 0.0 | 19.0 | 4123 | 0.5207 | 0.9070 |
| 0.0 | 20.0 | 4340 | 0.5310 | 0.9070 |
| 0.0 | 21.0 | 4557 | 0.5341 | 0.9070 |
| 0.0 | 22.0 | 4774 | 0.5389 | 0.9070 |
| 0.0 | 23.0 | 4991 | 0.5470 | 0.9070 |
| 0.0 | 24.0 | 5208 | 0.5525 | 0.9070 |
| 0.0 | 25.0 | 5425 | 0.5607 | 0.9070 |
| 0.0 | 26.0 | 5642 | 0.5630 | 0.9070 |
| 0.0 | 27.0 | 5859 | 0.5707 | 0.9302 |
| 0.0 | 28.0 | 6076 | 0.5785 | 0.9302 |
| 0.0 | 29.0 | 6293 | 0.5816 | 0.9302 |
| 0.0 | 30.0 | 6510 | 0.5927 | 0.9302 |
| 0.0 | 31.0 | 6727 | 0.6021 | 0.9302 |
| 0.0 | 32.0 | 6944 | 0.6045 | 0.9302 |
| 0.0 | 33.0 | 7161 | 0.6209 | 0.9302 |
| 0.0 | 34.0 | 7378 | 0.6273 | 0.9302 |
| 0.0 | 35.0 | 7595 | 0.6296 | 0.9302 |
| 0.0 | 36.0 | 7812 | 0.6372 | 0.9302 |
| 0.0 | 37.0 | 8029 | 0.6432 | 0.9302 |
| 0.0 | 38.0 | 8246 | 0.6544 | 0.9302 |
| 0.0 | 39.0 | 8463 | 0.6520 | 0.9302 |
| 0.0 | 40.0 | 8680 | 0.6641 | 0.9302 |
| 0.0 | 41.0 | 8897 | 0.6713 | 0.9302 |
| 0.0 | 42.0 | 9114 | 0.6757 | 0.9302 |
| 0.0 | 43.0 | 9331 | 0.6829 | 0.9302 |
| 0.0 | 44.0 | 9548 | 0.6913 | 0.9302 |
| 0.0 | 45.0 | 9765 | 0.6942 | 0.9302 |
| 0.0 | 46.0 | 9982 | 0.7019 | 0.9302 |
| 0.0 | 47.0 | 10199 | 0.7046 | 0.9302 |
| 0.0 | 48.0 | 10416 | 0.7061 | 0.9302 |
| 0.0 | 49.0 | 10633 | 0.7073 | 0.9302 |
| 0.0 | 50.0 | 10850 | 0.7075 | 0.9302 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
helpmefindaname/flair-eml-sapbert-bc5cdr-chemical
|
helpmefindaname
| 2023-12-24T02:23:43Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | null | 2023-12-24T01:23:10Z |
---
tags:
- flair
- entity-mention-linker
---
## sapbert-bc5cdr-chemical
Biomedical Entity Mention Linking for chemicals
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
sentence = Sentence("Behavioral abnormalities in the Fmr1 KO2 Mouse Model of Fragile X Syndrome")
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("helpmefindaname/flair-eml-sapbert-bc5cdr-chemical")
dictionary = linker.dictionary
# find the candidates for the mentions
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(linker.entity_label_type):
print(f"Span: {span.text}")
for candidate_label in span.get_labels(linker.label_type):
candidate = dictionary[candidate_label.value]
print(f"Candidate: {candidate.concept_name}")
```
As an alternative to downloading the precomputed model (which requires considerable storage), you can also build the model
and compute the embeddings for the dataset yourself using:
```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-sapbert-bc5cdr-chemical", "chemical", dictionary_name_or_path="ctd-chemicals", hybrid_search=False, entity_type="chemical-eml")
```
This will reduce the download requirements, at the cost of computation.
This EntityMentionLinker uses [dmis-lab/biosyn-sapbert-bc5cdr-chemical](https://huggingface.co/dmis-lab/biosyn-sapbert-bc5cdr-chemical) as embeddings for linking mentions to candidates.
|
hkivancoral/hushem_40x_deit_base_sgd_001_fold3
|
hkivancoral
| 2023-12-24T02:20:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T01:39:21Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_001_fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7674418604651163
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_001_fold3
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4834
- Accuracy: 0.7674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2567 | 1.0 | 217 | 1.3908 | 0.3023 |
| 1.1156 | 2.0 | 434 | 1.3183 | 0.4186 |
| 0.9891 | 3.0 | 651 | 1.2352 | 0.5116 |
| 0.902 | 4.0 | 868 | 1.1401 | 0.5814 |
| 0.7383 | 5.0 | 1085 | 1.0533 | 0.6047 |
| 0.6659 | 6.0 | 1302 | 0.9783 | 0.6279 |
| 0.577 | 7.0 | 1519 | 0.9088 | 0.6047 |
| 0.5084 | 8.0 | 1736 | 0.8504 | 0.6512 |
| 0.4618 | 9.0 | 1953 | 0.8112 | 0.6512 |
| 0.3986 | 10.0 | 2170 | 0.7644 | 0.6744 |
| 0.3262 | 11.0 | 2387 | 0.7405 | 0.6744 |
| 0.3187 | 12.0 | 2604 | 0.7073 | 0.7442 |
| 0.287 | 13.0 | 2821 | 0.6756 | 0.7442 |
| 0.2667 | 14.0 | 3038 | 0.6524 | 0.7674 |
| 0.2566 | 15.0 | 3255 | 0.6373 | 0.7674 |
| 0.2206 | 16.0 | 3472 | 0.6121 | 0.7674 |
| 0.1851 | 17.0 | 3689 | 0.6018 | 0.7674 |
| 0.1802 | 18.0 | 3906 | 0.5901 | 0.7674 |
| 0.1691 | 19.0 | 4123 | 0.5735 | 0.7674 |
| 0.1555 | 20.0 | 4340 | 0.5642 | 0.7674 |
| 0.1532 | 21.0 | 4557 | 0.5647 | 0.7907 |
| 0.1287 | 22.0 | 4774 | 0.5473 | 0.7907 |
| 0.1172 | 23.0 | 4991 | 0.5337 | 0.7907 |
| 0.1215 | 24.0 | 5208 | 0.5344 | 0.7907 |
| 0.1 | 25.0 | 5425 | 0.5177 | 0.7907 |
| 0.1218 | 26.0 | 5642 | 0.5181 | 0.7907 |
| 0.0935 | 27.0 | 5859 | 0.5065 | 0.7907 |
| 0.0833 | 28.0 | 6076 | 0.4985 | 0.7907 |
| 0.0714 | 29.0 | 6293 | 0.4998 | 0.7907 |
| 0.0825 | 30.0 | 6510 | 0.4944 | 0.7907 |
| 0.0754 | 31.0 | 6727 | 0.4956 | 0.7674 |
| 0.0765 | 32.0 | 6944 | 0.4881 | 0.7674 |
| 0.0774 | 33.0 | 7161 | 0.4958 | 0.7674 |
| 0.057 | 34.0 | 7378 | 0.4894 | 0.7674 |
| 0.0663 | 35.0 | 7595 | 0.4882 | 0.7674 |
| 0.059 | 36.0 | 7812 | 0.4848 | 0.7674 |
| 0.0537 | 37.0 | 8029 | 0.4865 | 0.7674 |
| 0.0454 | 38.0 | 8246 | 0.4882 | 0.7674 |
| 0.0514 | 39.0 | 8463 | 0.4854 | 0.7674 |
| 0.0629 | 40.0 | 8680 | 0.4861 | 0.7674 |
| 0.0453 | 41.0 | 8897 | 0.4865 | 0.7674 |
| 0.0447 | 42.0 | 9114 | 0.4837 | 0.7674 |
| 0.0452 | 43.0 | 9331 | 0.4805 | 0.7907 |
| 0.0545 | 44.0 | 9548 | 0.4818 | 0.7907 |
| 0.0444 | 45.0 | 9765 | 0.4816 | 0.7907 |
| 0.0454 | 46.0 | 9982 | 0.4835 | 0.7674 |
| 0.0369 | 47.0 | 10199 | 0.4841 | 0.7674 |
| 0.0401 | 48.0 | 10416 | 0.4827 | 0.7907 |
| 0.0524 | 49.0 | 10633 | 0.4835 | 0.7674 |
| 0.0394 | 50.0 | 10850 | 0.4834 | 0.7674 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
jetro30087/OpenHermesMistral7b-JanisAICompatible-v1.01
|
jetro30087
| 2023-12-24T02:03:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-12-24T01:16:07Z |
The default model for the JanisAI User Interface.
Automatic Instructions: Run your JanisAIInstaller.exe or the JanisModelInstaller.exe located in your /JanisAI/ installation folder.
Manual Instructions: Copy the zip file containing the model into your /JanisAI/ installation folder and unzip it there.
Janis AI uses OpenHermes-2.5-Mistral-7b. https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B
This model is formatted and quantized according to MLC-LLM standards. https://github.com/mlc-ai/mlc-llm
---
license: apache-2.0
---
|
panosdou/my_awesome_eli5_clm-model
|
panosdou
| 2023-12-24T01:48:02Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-23T22:54:25Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 220 | 3.8066 |
| No log | 2.0 | 440 | 3.7933 |
| 3.9127 | 3.0 | 660 | 3.7925 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
ntc-ai/SDXL-LoRA-slider.blue-eye-color
|
ntc-ai
| 2023-12-24T01:44:25Z | 68 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-24T01:44:22Z |
---
language:
- en
thumbnail: "images/evaluate/blue eye color...red eye color/blue eye color_17_3.0.png"
widget:
- text: blue eye color
output:
url: images/blue eye color_17_3.0.png
- text: blue eye color
output:
url: images/blue eye color_19_3.0.png
- text: blue eye color
output:
url: images/blue eye color_20_3.0.png
- text: blue eye color
output:
url: images/blue eye color_21_3.0.png
- text: blue eye color
output:
url: images/blue eye color_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "blue eye color"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - blue eye color (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/blue eye color_17_-3.0.png" width=256 height=256 /> | <img src="images/blue eye color_17_0.0.png" width=256 height=256 /> | <img src="images/blue eye color_17_3.0.png" width=256 height=256 /> |
| <img src="images/blue eye color_19_-3.0.png" width=256 height=256 /> | <img src="images/blue eye color_19_0.0.png" width=256 height=256 /> | <img src="images/blue eye color_19_3.0.png" width=256 height=256 /> |
| <img src="images/blue eye color_20_-3.0.png" width=256 height=256 /> | <img src="images/blue eye color_20_0.0.png" width=256 height=256 /> | <img src="images/blue eye color_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
blue eye color
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.blue-eye-color', weight_name='blue eye color.safetensors', adapter_name="blue eye color")
# Activate the LoRA
pipe.set_adapters(["blue eye color"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, blue eye color"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
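The comparison table above shows the slider acting in both directions. Reusing `pipe` and the other variables from the snippet above, a negative adapter weight should push generations toward the opposite end of the slider (a small follow-up sketch, not part of the original example):
```python
# Negative weights correspond to the "Strength: -3" column in the table above
pipe.set_adapters(["blue eye color"], adapter_weights=[-2.0])
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height,
             guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result_negative.png')
```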
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 580 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
hkivancoral/hushem_40x_deit_base_adamax_00001_fold2
|
hkivancoral
| 2023-12-24T01:44:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T00:57:43Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_adamax_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7555555555555555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_adamax_00001_fold2
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1410
- Accuracy: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1698 | 1.0 | 215 | 0.9036 | 0.6667 |
| 0.0086 | 2.0 | 430 | 0.8771 | 0.8 |
| 0.0023 | 3.0 | 645 | 0.9467 | 0.8 |
| 0.0009 | 4.0 | 860 | 1.0021 | 0.7778 |
| 0.0006 | 5.0 | 1075 | 1.0414 | 0.7556 |
| 0.0004 | 6.0 | 1290 | 1.0789 | 0.7556 |
| 0.0003 | 7.0 | 1505 | 1.0927 | 0.7778 |
| 0.0002 | 8.0 | 1720 | 1.1233 | 0.7778 |
| 0.0002 | 9.0 | 1935 | 1.1652 | 0.7778 |
| 0.0001 | 10.0 | 2150 | 1.1805 | 0.7778 |
| 0.0001 | 11.0 | 2365 | 1.2046 | 0.7778 |
| 0.0001 | 12.0 | 2580 | 1.2366 | 0.7778 |
| 0.0001 | 13.0 | 2795 | 1.2540 | 0.7778 |
| 0.0001 | 14.0 | 3010 | 1.2856 | 0.7778 |
| 0.0 | 15.0 | 3225 | 1.3104 | 0.7778 |
| 0.0 | 16.0 | 3440 | 1.3434 | 0.7778 |
| 0.0 | 17.0 | 3655 | 1.3705 | 0.7778 |
| 0.0 | 18.0 | 3870 | 1.3922 | 0.7778 |
| 0.0 | 19.0 | 4085 | 1.4221 | 0.7778 |
| 0.0 | 20.0 | 4300 | 1.4557 | 0.7778 |
| 0.0 | 21.0 | 4515 | 1.4854 | 0.7778 |
| 0.0 | 22.0 | 4730 | 1.5092 | 0.7778 |
| 0.0 | 23.0 | 4945 | 1.5343 | 0.7778 |
| 0.0 | 24.0 | 5160 | 1.5541 | 0.7778 |
| 0.0 | 25.0 | 5375 | 1.5830 | 0.7778 |
| 0.0 | 26.0 | 5590 | 1.6177 | 0.7778 |
| 0.0 | 27.0 | 5805 | 1.6474 | 0.7778 |
| 0.0 | 28.0 | 6020 | 1.6634 | 0.7778 |
| 0.0 | 29.0 | 6235 | 1.6875 | 0.7778 |
| 0.0 | 30.0 | 6450 | 1.7106 | 0.7778 |
| 0.0 | 31.0 | 6665 | 1.7484 | 0.7778 |
| 0.0 | 32.0 | 6880 | 1.7797 | 0.7778 |
| 0.0 | 33.0 | 7095 | 1.8167 | 0.7778 |
| 0.0 | 34.0 | 7310 | 1.8422 | 0.7778 |
| 0.0 | 35.0 | 7525 | 1.8678 | 0.7778 |
| 0.0 | 36.0 | 7740 | 1.8865 | 0.7778 |
| 0.0 | 37.0 | 7955 | 1.9143 | 0.7778 |
| 0.0 | 38.0 | 8170 | 1.9225 | 0.7778 |
| 0.0 | 39.0 | 8385 | 1.9621 | 0.7778 |
| 0.0 | 40.0 | 8600 | 1.9777 | 0.7556 |
| 0.0 | 41.0 | 8815 | 2.0240 | 0.7778 |
| 0.0 | 42.0 | 9030 | 2.0141 | 0.7556 |
| 0.0 | 43.0 | 9245 | 2.0463 | 0.7556 |
| 0.0 | 44.0 | 9460 | 2.0688 | 0.7556 |
| 0.0 | 45.0 | 9675 | 2.0919 | 0.7556 |
| 0.0 | 46.0 | 9890 | 2.1123 | 0.7556 |
| 0.0 | 47.0 | 10105 | 2.1294 | 0.7556 |
| 0.0 | 48.0 | 10320 | 2.1354 | 0.7556 |
| 0.0 | 49.0 | 10535 | 2.1448 | 0.7556 |
| 0.0 | 50.0 | 10750 | 2.1410 | 0.7556 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_tiny_adamax_0001_fold2
|
hkivancoral
| 2023-12-24T01:34:59Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T00:59:28Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_adamax_0001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7555555555555555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_adamax_0001_fold2
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9983
- Accuracy: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0219 | 1.0 | 215 | 0.6349 | 0.8667 |
| 0.0009 | 2.0 | 430 | 1.0022 | 0.7333 |
| 0.0057 | 3.0 | 645 | 1.0734 | 0.7556 |
| 0.0006 | 4.0 | 860 | 1.3398 | 0.7778 |
| 0.0 | 5.0 | 1075 | 1.6890 | 0.7333 |
| 0.0 | 6.0 | 1290 | 1.6522 | 0.7111 |
| 0.0 | 7.0 | 1505 | 1.6220 | 0.7111 |
| 0.0 | 8.0 | 1720 | 1.6021 | 0.7333 |
| 0.0 | 9.0 | 1935 | 1.5870 | 0.7333 |
| 0.0 | 10.0 | 2150 | 1.5842 | 0.7333 |
| 0.0 | 11.0 | 2365 | 1.5782 | 0.7333 |
| 0.0 | 12.0 | 2580 | 1.5625 | 0.7333 |
| 0.0 | 13.0 | 2795 | 1.5601 | 0.7333 |
| 0.0 | 14.0 | 3010 | 1.5521 | 0.7333 |
| 0.0 | 15.0 | 3225 | 1.5637 | 0.7333 |
| 0.0 | 16.0 | 3440 | 1.5652 | 0.7778 |
| 0.0 | 17.0 | 3655 | 1.5622 | 0.7333 |
| 0.0 | 18.0 | 3870 | 1.5700 | 0.7778 |
| 0.0 | 19.0 | 4085 | 1.5813 | 0.7778 |
| 0.0 | 20.0 | 4300 | 1.5874 | 0.7556 |
| 0.0 | 21.0 | 4515 | 1.5931 | 0.7556 |
| 0.0 | 22.0 | 4730 | 1.6081 | 0.7556 |
| 0.0 | 23.0 | 4945 | 1.6167 | 0.7556 |
| 0.0 | 24.0 | 5160 | 1.6398 | 0.7556 |
| 0.0 | 25.0 | 5375 | 1.6448 | 0.7556 |
| 0.0 | 26.0 | 5590 | 1.6610 | 0.7556 |
| 0.0 | 27.0 | 5805 | 1.6849 | 0.7333 |
| 0.0 | 28.0 | 6020 | 1.6982 | 0.7556 |
| 0.0 | 29.0 | 6235 | 1.7059 | 0.7556 |
| 0.0 | 30.0 | 6450 | 1.7216 | 0.7556 |
| 0.0 | 31.0 | 6665 | 1.7579 | 0.7556 |
| 0.0 | 32.0 | 6880 | 1.7634 | 0.7556 |
| 0.0 | 33.0 | 7095 | 1.7775 | 0.7556 |
| 0.0 | 34.0 | 7310 | 1.8193 | 0.7556 |
| 0.0 | 35.0 | 7525 | 1.8288 | 0.7556 |
| 0.0 | 36.0 | 7740 | 1.8617 | 0.7556 |
| 0.0 | 37.0 | 7955 | 1.8992 | 0.7556 |
| 0.0 | 38.0 | 8170 | 1.9097 | 0.7556 |
| 0.0 | 39.0 | 8385 | 1.9200 | 0.7556 |
| 0.0 | 40.0 | 8600 | 1.9431 | 0.7556 |
| 0.0 | 41.0 | 8815 | 1.9378 | 0.7556 |
| 0.0 | 42.0 | 9030 | 1.9739 | 0.7556 |
| 0.0 | 43.0 | 9245 | 1.9777 | 0.7556 |
| 0.0 | 44.0 | 9460 | 1.9924 | 0.7556 |
| 0.0 | 45.0 | 9675 | 1.9923 | 0.7556 |
| 0.0 | 46.0 | 9890 | 1.9872 | 0.7556 |
| 0.0 | 47.0 | 10105 | 2.0011 | 0.7556 |
| 0.0 | 48.0 | 10320 | 2.0002 | 0.7556 |
| 0.0 | 49.0 | 10535 | 1.9945 | 0.7556 |
| 0.0 | 50.0 | 10750 | 1.9983 | 0.7556 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
vit54155/vit-base-patch16-224-in21k-euroSat
|
vit54155
| 2023-12-24T01:18:33Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T01:16:31Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: vit54155/vit-base-patch16-224-in21k-euroSat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vit54155/vit-base-patch16-224-in21k-euroSat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6316
- Train Accuracy: 0.6693
- Train Top-3-accuracy: 1.0
- Validation Loss: 0.6555
- Validation Accuracy: 0.6320
- Validation Top-3-accuracy: 1.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'module': 'transformers.optimization_tf', 'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 360, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}, 'registered_name': 'AdamWeightDecay'}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
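The serialized optimizer above amounts to `AdamWeightDecay` with a polynomial (power 1.0, i.e. linear) decay of the learning rate from 3e-05 to 0 over 360 steps and a weight decay rate of 0.01. As a rough sketch (an assumption, not the original training script), an equivalent optimizer can be recreated with the `transformers` helper:
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear decay from 3e-05 to 0 over 360 steps, no warmup
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-05,
    num_train_steps=360,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```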
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.6316 | 0.6693 | 1.0 | 0.6555 | 0.6320 | 1.0 | 0 |
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.13.0
- Datasets 2.16.0
- Tokenizers 0.15.0
|
helpmefindaname/flair-eml-sapbert-bc2gn-gene
|
helpmefindaname
| 2023-12-24T01:08:36Z | 2 | 0 |
flair
|
[
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | null | 2023-12-24T00:04:55Z |
---
tags:
- flair
- entity-mention-linker
---
## sapbert-bc2gn-gene
Biomedical Entity Mention Linking for gene
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
sentence = Sentence("Behavioral abnormalities in the Fmr1 KO2 Mouse Model of Fragile X Syndrome")
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("helpmefindaname/flair-eml-sapbert-bc2gn-gene")
dictionary = linker.dictionary
# find the candidates for the mentions
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(linker.entity_label_type):
print(f"Span: {span.text}")
for candidate_label in span.get_labels(linker.label_type):
candidate = dictionary[candidate_label.value]
print(f"Candidate: {candidate.concept_name}")
```
As an alternative to downloading the already precomputed model (which requires a lot of storage), you can also build the model
and compute the embeddings for the dataset using:
```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-sapbert-bc2gn", "gene", dictionary_name_or_path="ncbi-gene", hybrid_search=False, entity_type="gene-eml")
```
This will reduce the download requirements, at the cost of computation.
This EntityMentionLinker uses [dmis-lab/biosyn-sapbert-bc2gn](https://huggingface.co/dmis-lab/biosyn-sapbert-bc2gn) as embeddings for linking mentions to candidates.
|
evilfreelancer/PavelGPT-7B-128K-v0.1-LoRA
|
evilfreelancer
| 2023-12-24T01:05:34Z | 11 | 9 |
peft
|
[
"peft",
"mistral",
"text-generation",
"custom_code",
"ru",
"en",
"dataset:d0rj/gsm8k-ru",
"dataset:d0rj/alpaca-cleaned-ru",
"dataset:IlyaGusev/ru_turbo_alpaca",
"dataset:IlyaGusev/ru_turbo_alpaca_evol_instruct",
"license:mit",
"region:us"
] |
text-generation
| 2023-12-22T15:02:46Z |
---
license: mit
datasets:
- d0rj/gsm8k-ru
- d0rj/alpaca-cleaned-ru
- IlyaGusev/ru_turbo_alpaca
- IlyaGusev/ru_turbo_alpaca_evol_instruct
language:
- ru
- en
library_name: peft
pipeline_tag: text-generation
---
# PavelGPT 7B 128K (v0.1)
PavelGPT is a Russian language model based
on [NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)
so it also has 7B parameters and 128k context size.
- **Developed by:** [EvilFreelancer](https://github.com/EvilFreelancer)
- **Model type:** LoRA
- **Language(s) (NLP):** Russian, English
- **License:** MIT
- **Finetuned from model:**
[NousResearch/Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k)
## Details
### Description
The model is able to generate text in Russian, answer questions, solve simple logical puzzles and perform simple math
calculations. It was trained on a medium-sized corpus of Russian instructions, manuals and other texts.
It is optimized for INSTRUCT mode and works best if you give it a system prompt and only one instruction (without
any history at all).
### Datasets
PavelGPT was trained on following datasets:
- [d0rj/gsm8k-ru](https://huggingface.co/datasets/d0rj/gsm8k-ru) - 7.5k rows of Russian puzzles
- [d0rj/alpaca-cleaned-ru](https://huggingface.co/datasets/d0rj/alpaca-cleaned-ru) - 51.8k rows of Russian general
instructions
- [IlyaGusev/ru_turbo_alpaca](https://huggingface.co/datasets/IlyaGusev/ru_turbo_alpaca) - 29.98k rows of Russian
instructions, but was used only instructions with `ok` label (~4.5k)
- [IlyaGusev/ru_turbo_alpaca_evol_instruct](https://huggingface.co/datasets/IlyaGusev/ru_turbo_alpaca_evol_instruct) -
47.8k rows of Russian instructions
All datasets were merged with the help
of [create_instruct_set_v2.py](https://github.com/EvilFreelancer/saiga-custom/blob/main/data_processing/create_instruct_set_v2.py)
(hash a1151bf903990b88177d30bd1de67c7b94fdecef).
After merging and deduplication we got 46.2k rows of training data and 2.4k rows of validation data.
### Training
The model was trained on 1x RTX 4090 24GB for 1 epoch (~12 hours) with the help of the PEFT library in 4-bit quantization mode.
Context size was set to 4096 tokens, batch size was set to 1, gradient accumulation was set to 128.
Training script: [pavelgpt_7b_128k.ipynb](https://github.com/EvilFreelancer/saiga-custom/blob/main/pavelgpt_7b_128k.ipynb).
Wandb report: [j6jt6cyd](https://api.wandb.ai/links/evilfreelancer/j6jt6cyd).
### Test results
Tests were performed with the following [generation configuration](./generation_config.json):
```json
{
"pad_token_id": 0,
"unk_token_id": 0,
"bos_token_id": 1,
"eos_token_id": 2,
"temperature": 0.2,
"top_p": 0.2,
"top_k": 20,
"do_sample": true,
"max_new_tokens": 4096,
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 15
}
```
Here are the results:
```text
User: Привет! Кто ты?
Bot: Я - языковой модель ИИ, называемый PavelGPT. Я способен к естественному языку и обучен отвечать на вопросы по многим темам. Как правило, я использую информацию из Интернета для создания своих ответов.
User: Ты помнишь как здесь оказался?
Bot: Я был создан компанией OpenAI в качестве исследовательского проекта и запущен в декабре 2018 года. С тех пор он продолжает развивать свой алгоритм путем обучения на больших объемах данных и улучшения его функций.
User: Какой по твоему сейчас год?
Bot: В настоящее время это 2023 год.
User: А какой день?
Bot: Сейчас 6 января 2023 г.
User: Почему трава зелёная?
Bot: Зеленый цвет является результатом взаимодействия хлорофилла (пигмента) со светом. Хлорофилл может поглощать красное и синее спектральные излучения, оставляя только зеленое. Эти компоненты вместе образуют зеленый цвет растительной массы.
User: Я пошёл в магазин и купил 10 яблок. Два яблока я отдал соседу и два ремонтнику. Потом я купил ещё 5 яблок и одно из них съел. Также я отдал три банана моему брату. Сколько яблок у меня осталось?
Bot: У вас осталось 10 яблок после того, как вы продали их друзьям и брату.
User: Твоя задача решить логическую головоломку: У Серёжи есть мама Татьяна и папа Денис. И ещё у Серёжи есть брат Иван. Кем Ивану приходится Татьяна?
Bot: Мамой Ивана должна быть Татьяна.
```
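The dialogue above was produced with the generation configuration shown earlier. A minimal inference sketch, assuming the adapter is loaded on top of the base model with the standard PEFT API (not an official snippet from the authors; the prompt template is not documented here, so a plain question is used as a placeholder):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Yarn-Mistral-7b-128k"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
# Attach the PavelGPT LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base, "evilfreelancer/PavelGPT-7B-128K-v0.1-LoRA")

prompt = "Почему трава зелёная?"  # placeholder question from the tests above
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.2,
    top_p=0.2,
    top_k=20,
    repetition_penalty=1.2,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```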
## Links
* https://t.me/evilfreelancer
* https://dzen.ru/evilfreelancer
* https://github.com/EvilFreelancer
* https://youtube.com/@pavelzloi
|
toshi456/llava-pretrain-jp-1.3b-v1.0
|
toshi456
| 2023-12-24T01:05:16Z | 18 | 3 |
transformers
|
[
"transformers",
"llava",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-24T00:55:09Z |
---
license: apache-2.0
---
# LLaVA-JP Model Card
This is a pretrained checkpoint; you can use it to instruction-tune your multimodal models.
Check out the instructions [here](https://github.com/tosiyuki/LLaVA-JP)
## Model details
**Model type:**
LLaVA-JP is trained by fine-tuning [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) on multimodal instruction-following data using the LLaVA method.
## Training dataset
- [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA)
- [Japanese STAIR Captions](http://captions.stair.center/)
## Acknowledgement
- [LLaVA](https://llava-vl.github.io/)
- [LLM-jp](https://llm-jp.nii.ac.jp/)
## License
Apache-2.0
|
hkivancoral/hushem_40x_deit_tiny_adamax_0001_fold1
|
hkivancoral
| 2023-12-24T00:59:16Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T00:23:27Z |
---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_tiny_adamax_0001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8444444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_tiny_adamax_0001_fold1
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3786
- Accuracy: 0.8444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0587 | 1.0 | 215 | 0.8379 | 0.7778 |
| 0.0029 | 2.0 | 430 | 0.8134 | 0.8222 |
| 0.0061 | 3.0 | 645 | 0.6824 | 0.8667 |
| 0.0003 | 4.0 | 860 | 0.8964 | 0.8444 |
| 0.0004 | 5.0 | 1075 | 1.1389 | 0.8 |
| 0.0069 | 6.0 | 1290 | 0.8847 | 0.8222 |
| 0.0014 | 7.0 | 1505 | 0.9407 | 0.8444 |
| 0.0208 | 8.0 | 1720 | 1.2665 | 0.8 |
| 0.0 | 9.0 | 1935 | 0.7746 | 0.8222 |
| 0.0001 | 10.0 | 2150 | 0.9541 | 0.8222 |
| 0.0 | 11.0 | 2365 | 1.3297 | 0.7556 |
| 0.0 | 12.0 | 2580 | 1.2887 | 0.7778 |
| 0.0 | 13.0 | 2795 | 1.2405 | 0.7778 |
| 0.0 | 14.0 | 3010 | 1.2098 | 0.8 |
| 0.0 | 15.0 | 3225 | 1.1905 | 0.8 |
| 0.0 | 16.0 | 3440 | 1.1775 | 0.8 |
| 0.0 | 17.0 | 3655 | 1.1699 | 0.8 |
| 0.0 | 18.0 | 3870 | 1.1668 | 0.8 |
| 0.0 | 19.0 | 4085 | 1.1651 | 0.8 |
| 0.0 | 20.0 | 4300 | 1.1645 | 0.8 |
| 0.0 | 21.0 | 4515 | 1.1663 | 0.8 |
| 0.0 | 22.0 | 4730 | 1.1709 | 0.8 |
| 0.0 | 23.0 | 4945 | 1.1752 | 0.8 |
| 0.0 | 24.0 | 5160 | 1.1807 | 0.8 |
| 0.0 | 25.0 | 5375 | 1.1874 | 0.8222 |
| 0.0 | 26.0 | 5590 | 1.1925 | 0.8222 |
| 0.0 | 27.0 | 5805 | 1.1999 | 0.8222 |
| 0.0 | 28.0 | 6020 | 1.2057 | 0.8222 |
| 0.0 | 29.0 | 6235 | 1.2150 | 0.8222 |
| 0.0 | 30.0 | 6450 | 1.2228 | 0.8222 |
| 0.0 | 31.0 | 6665 | 1.2334 | 0.8222 |
| 0.0 | 32.0 | 6880 | 1.2399 | 0.8222 |
| 0.0 | 33.0 | 7095 | 1.2440 | 0.8222 |
| 0.0 | 34.0 | 7310 | 1.2539 | 0.8222 |
| 0.0 | 35.0 | 7525 | 1.2643 | 0.8222 |
| 0.0 | 36.0 | 7740 | 1.2752 | 0.8222 |
| 0.0 | 37.0 | 7955 | 1.2837 | 0.8222 |
| 0.0 | 38.0 | 8170 | 1.2941 | 0.8222 |
| 0.0 | 39.0 | 8385 | 1.3057 | 0.8444 |
| 0.0 | 40.0 | 8600 | 1.3171 | 0.8444 |
| 0.0 | 41.0 | 8815 | 1.3233 | 0.8444 |
| 0.0 | 42.0 | 9030 | 1.3334 | 0.8444 |
| 0.0 | 43.0 | 9245 | 1.3422 | 0.8444 |
| 0.0 | 44.0 | 9460 | 1.3487 | 0.8444 |
| 0.0 | 45.0 | 9675 | 1.3569 | 0.8444 |
| 0.0 | 46.0 | 9890 | 1.3629 | 0.8444 |
| 0.0 | 47.0 | 10105 | 1.3713 | 0.8444 |
| 0.0 | 48.0 | 10320 | 1.3761 | 0.8444 |
| 0.0 | 49.0 | 10535 | 1.3795 | 0.8444 |
| 0.0 | 50.0 | 10750 | 1.3786 | 0.8444 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.1+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_base_adamax_00001_fold1
|
hkivancoral
| 2023-12-24T00:57:29Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T00:12:59Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_adamax_00001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7777777777777778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_adamax_00001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7464
- Accuracy: 0.7778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.175 | 1.0 | 215 | 0.7361 | 0.7556 |
| 0.0109 | 2.0 | 430 | 0.6497 | 0.8 |
| 0.0023 | 3.0 | 645 | 0.7453 | 0.8222 |
| 0.001 | 4.0 | 860 | 0.7854 | 0.8222 |
| 0.0006 | 5.0 | 1075 | 0.8105 | 0.8222 |
| 0.0004 | 6.0 | 1290 | 0.8328 | 0.8222 |
| 0.0003 | 7.0 | 1505 | 0.8638 | 0.8222 |
| 0.0002 | 8.0 | 1720 | 0.8701 | 0.8222 |
| 0.0002 | 9.0 | 1935 | 0.9048 | 0.8222 |
| 0.0001 | 10.0 | 2150 | 0.9203 | 0.8 |
| 0.0001 | 11.0 | 2365 | 0.9399 | 0.8 |
| 0.0001 | 12.0 | 2580 | 0.9611 | 0.8 |
| 0.0001 | 13.0 | 2795 | 0.9847 | 0.8 |
| 0.0001 | 14.0 | 3010 | 1.0078 | 0.8 |
| 0.0 | 15.0 | 3225 | 1.0165 | 0.8 |
| 0.0 | 16.0 | 3440 | 1.0509 | 0.8 |
| 0.0 | 17.0 | 3655 | 1.0662 | 0.8 |
| 0.0 | 18.0 | 3870 | 1.0960 | 0.8 |
| 0.0 | 19.0 | 4085 | 1.1102 | 0.8 |
| 0.0 | 20.0 | 4300 | 1.1333 | 0.8 |
| 0.0 | 21.0 | 4515 | 1.1560 | 0.8 |
| 0.0 | 22.0 | 4730 | 1.1835 | 0.8 |
| 0.0 | 23.0 | 4945 | 1.2066 | 0.8 |
| 0.0 | 24.0 | 5160 | 1.2238 | 0.8 |
| 0.0 | 25.0 | 5375 | 1.2452 | 0.8 |
| 0.0 | 26.0 | 5590 | 1.2607 | 0.8 |
| 0.0 | 27.0 | 5805 | 1.2985 | 0.8 |
| 0.0 | 28.0 | 6020 | 1.3142 | 0.7778 |
| 0.0 | 29.0 | 6235 | 1.3455 | 0.7778 |
| 0.0 | 30.0 | 6450 | 1.3849 | 0.7778 |
| 0.0 | 31.0 | 6665 | 1.4087 | 0.7778 |
| 0.0 | 32.0 | 6880 | 1.4316 | 0.7778 |
| 0.0 | 33.0 | 7095 | 1.4372 | 0.7778 |
| 0.0 | 34.0 | 7310 | 1.4578 | 0.7778 |
| 0.0 | 35.0 | 7525 | 1.5115 | 0.7778 |
| 0.0 | 36.0 | 7740 | 1.5151 | 0.7778 |
| 0.0 | 37.0 | 7955 | 1.5376 | 0.7778 |
| 0.0 | 38.0 | 8170 | 1.5694 | 0.7778 |
| 0.0 | 39.0 | 8385 | 1.5967 | 0.7778 |
| 0.0 | 40.0 | 8600 | 1.6099 | 0.7778 |
| 0.0 | 41.0 | 8815 | 1.6278 | 0.7778 |
| 0.0 | 42.0 | 9030 | 1.6372 | 0.7778 |
| 0.0 | 43.0 | 9245 | 1.6697 | 0.7778 |
| 0.0 | 44.0 | 9460 | 1.6889 | 0.7778 |
| 0.0 | 45.0 | 9675 | 1.6985 | 0.7778 |
| 0.0 | 46.0 | 9890 | 1.7202 | 0.7778 |
| 0.0 | 47.0 | 10105 | 1.7225 | 0.7778 |
| 0.0 | 48.0 | 10320 | 1.7406 | 0.7778 |
| 0.0 | 49.0 | 10535 | 1.7437 | 0.7778 |
| 0.0 | 50.0 | 10750 | 1.7464 | 0.7778 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
hkivancoral/hushem_40x_deit_base_sgd_001_fold1
|
hkivancoral
| 2023-12-24T00:56:49Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-24T00:13:36Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_40x_deit_base_sgd_001_fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7555555555555555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_40x_deit_base_sgd_001_fold1
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7430
- Accuracy: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2392 | 1.0 | 215 | 1.3895 | 0.2667 |
| 1.1003 | 2.0 | 430 | 1.3294 | 0.3333 |
| 1.0196 | 3.0 | 645 | 1.2624 | 0.4444 |
| 0.8639 | 4.0 | 860 | 1.1946 | 0.4889 |
| 0.731 | 5.0 | 1075 | 1.1313 | 0.5111 |
| 0.6646 | 6.0 | 1290 | 1.0718 | 0.5556 |
| 0.545 | 7.0 | 1505 | 1.0254 | 0.6 |
| 0.4701 | 8.0 | 1720 | 0.9800 | 0.6444 |
| 0.4065 | 9.0 | 1935 | 0.9495 | 0.6222 |
| 0.3851 | 10.0 | 2150 | 0.9148 | 0.6667 |
| 0.3271 | 11.0 | 2365 | 0.8947 | 0.6667 |
| 0.2977 | 12.0 | 2580 | 0.8732 | 0.6889 |
| 0.2671 | 13.0 | 2795 | 0.8416 | 0.7111 |
| 0.2428 | 14.0 | 3010 | 0.8450 | 0.6889 |
| 0.2387 | 15.0 | 3225 | 0.8270 | 0.7111 |
| 0.1988 | 16.0 | 3440 | 0.8218 | 0.7111 |
| 0.1804 | 17.0 | 3655 | 0.8107 | 0.7333 |
| 0.1681 | 18.0 | 3870 | 0.8058 | 0.7333 |
| 0.1475 | 19.0 | 4085 | 0.7968 | 0.7333 |
| 0.1494 | 20.0 | 4300 | 0.7851 | 0.7556 |
| 0.1288 | 21.0 | 4515 | 0.7807 | 0.7556 |
| 0.1265 | 22.0 | 4730 | 0.7751 | 0.7556 |
| 0.1136 | 23.0 | 4945 | 0.7744 | 0.7556 |
| 0.094 | 24.0 | 5160 | 0.7654 | 0.7556 |
| 0.0987 | 25.0 | 5375 | 0.7661 | 0.7556 |
| 0.096 | 26.0 | 5590 | 0.7527 | 0.7556 |
| 0.084 | 27.0 | 5805 | 0.7535 | 0.7556 |
| 0.069 | 28.0 | 6020 | 0.7589 | 0.7556 |
| 0.0764 | 29.0 | 6235 | 0.7612 | 0.7556 |
| 0.067 | 30.0 | 6450 | 0.7558 | 0.7556 |
| 0.0458 | 31.0 | 6665 | 0.7531 | 0.7333 |
| 0.0687 | 32.0 | 6880 | 0.7463 | 0.7556 |
| 0.0414 | 33.0 | 7095 | 0.7445 | 0.7556 |
| 0.0522 | 34.0 | 7310 | 0.7378 | 0.7556 |
| 0.0521 | 35.0 | 7525 | 0.7477 | 0.7556 |
| 0.0458 | 36.0 | 7740 | 0.7370 | 0.7556 |
| 0.0586 | 37.0 | 7955 | 0.7425 | 0.7556 |
| 0.0551 | 38.0 | 8170 | 0.7441 | 0.7556 |
| 0.0389 | 39.0 | 8385 | 0.7437 | 0.7556 |
| 0.0335 | 40.0 | 8600 | 0.7446 | 0.7556 |
| 0.0337 | 41.0 | 8815 | 0.7439 | 0.7556 |
| 0.0431 | 42.0 | 9030 | 0.7421 | 0.7556 |
| 0.0392 | 43.0 | 9245 | 0.7439 | 0.7556 |
| 0.03 | 44.0 | 9460 | 0.7447 | 0.7556 |
| 0.0402 | 45.0 | 9675 | 0.7426 | 0.7556 |
| 0.0313 | 46.0 | 9890 | 0.7416 | 0.7556 |
| 0.0341 | 47.0 | 10105 | 0.7428 | 0.7556 |
| 0.0375 | 48.0 | 10320 | 0.7420 | 0.7556 |
| 0.0432 | 49.0 | 10535 | 0.7428 | 0.7556 |
| 0.0389 | 50.0 | 10750 | 0.7430 | 0.7556 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
jdluzen/Mistral-7B-Instruct-v0.2-GGUF
|
jdluzen
| 2023-12-24T00:17:06Z | 8 | 0 | null |
[
"gguf",
"llama.cpp",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2023-12-23T23:42:22Z |
---
license: apache-2.0
tags:
- llama.cpp
- gguf
---
A direct llama.cpp/GGUF fp16 conversion of the original safetensors from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2, with a context length of 8192.
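As a quick local smoke test, the file can be run from Python with the llama-cpp-python bindings (an assumption; the card itself only mentions llama.cpp, and the GGUF file name below is a placeholder for whichever file is in this repo):
```python
from llama_cpp import Llama

# Load the fp16 GGUF with the full 8192-token context (file name is a placeholder)
llm = Llama(model_path="./mistral-7b-instruct-v0.2.fp16.gguf", n_ctx=8192)

# Mistral-Instruct uses the [INST] ... [/INST] prompt format
out = llm("[INST] Summarize what GGUF is in two sentences. [/INST]", max_tokens=256)
print(out["choices"][0]["text"])
```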
|
helpmefindaname/flair-eml-chemicals-exact-match
|
helpmefindaname
| 2023-12-24T00:04:47Z | 4 | 0 |
flair
|
[
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | null | 2023-12-24T00:03:34Z |
---
tags:
- flair
- entity-mention-linker
---
## chemicals-exact-match
Biomedical Entity Mention Linking for chemical
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
sentence = Sentence("Behavioral abnormalities in the Fmr1 KO2 Mouse Model of Fragile X Syndrome")
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("helpmefindaname/flair-eml-chemicals-exact-match")
dictionary = linker.dictionary
# find the candidates for the mentions
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(linker.entity_label_type):
print(f"Span: {span.text}")
for candidate_label in span.get_labels(linker.label_type):
candidate = dictionary[candidate_label.value]
print(f"Candidate: {candidate.concept_name}")
```
As an alternative to downloading the already precomputed model (which requires a lot of storage), you can also build the model
and compute the embeddings for the dataset using:
```python
linker = EntityMentionLinker.build("exact-string-match", "chemical", dictionary_name_or_path="ctd-chemicals", hybrid_search=False, entity_type="chemical-eml")
```
This will reduce the download requirements, at the cost of computation.
This EntityMentionLinker uses exact string matching (`exact-string-match`) for linking mentions to candidates.
|
helpmefindaname/flair-eml-diseases-exact-match
|
helpmefindaname
| 2023-12-24T00:03:29Z | 5 | 0 |
flair
|
[
"flair",
"pytorch",
"entity-mention-linker",
"region:us"
] | null | 2023-12-24T00:03:17Z |
---
tags:
- flair
- entity-mention-linker
---
## diseases-exact-match
Biomedical Entity Mention Linking for diseases
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
sentence = Sentence("Behavioral abnormalities in the Fmr1 KO2 Mouse Model of Fragile X Syndrome")
# load hunflair to detect the entity mentions we want to link.
tagger = Classifier.load("hunflair")
tagger.predict(sentence)
# load the linker and dictionary
linker = EntityMentionLinker.load("helpmefindaname/flair-eml-diseases-exact-match")
dictionary = linker.dictionary
# find the candidates for the mentions
linker.predict(sentence)
# print the results for each entity mention:
for span in sentence.get_spans(linker.entity_label_type):
print(f"Span: {span.text}")
for candidate_label in span.get_labels(linker.label_type):
candidate = dictionary[candidate_label.value]
print(f"Candidate: {candidate.concept_name}")
```
As an alternative to downloading the already precomputed model (which requires a lot of storage), you can also build the model
and compute the embeddings for the dataset using:
```python
linker = EntityMentionLinker.build("exact-string-match", "diseases", dictionary_name_or_path="ctd-diseases", hybrid_search=False, entity_type="diseases-eml")
```
This will reduce the download requirements, at the cost of computation.
This EntityMentionLinker uses exact string matching (`exact-string-match`) for linking mentions to candidates.
|