modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-08 19:17:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 549 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-08 18:30:19) | card (string, length 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
rebeccaD/phi-2-role-play
|
rebeccaD
| 2024-03-20T11:33:53Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-20T11:33:47Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-role-play
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-role-play
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
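As a minimal, hedged sketch (assuming the adapter is applied on top of the `microsoft/phi-2` base model listed above; the prompt is illustrative only), the PEFT weights can typically be loaded like this:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the role-play PEFT adapter from this repo.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "rebeccaD/phi-2-role-play")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Illustrative prompt; the expected prompt format is not documented in this card.
inputs = tokenizer("You are a medieval knight. Describe your day.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=100)[0], skip_special_tokens=True))
```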
|
InferenceIllusionist/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-iMat-GGUF
|
InferenceIllusionist
| 2024-03-20T11:29:16Z | 33 | 0 | null |
[
"gguf",
"merge",
"storywriting",
"text adventure",
"iMat",
"endpoints_compatible",
"region:us"
] | null | 2024-03-17T13:01:49Z |
---
tags:
- merge
- gguf
- storywriting
- text adventure
- iMat
---
<img src="https://i.imgur.com/P68dXux.png" width="400"/>
# Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-iMat-GGUF
<b>Special request.</b> Quantized from fp32 with love.
* Quantizations made possible using the .imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow) again).
For a brief rundown of iMatrix quant performance, please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
<i>All quants are verified working prior to uploading to the repo for your safety and convenience.</i>
Please note that importance matrix quantizations are a work in progress; IQ3 and above is recommended for best results.
<b>Tip:</b> For best speed, pick a size that fits in your GPU while still leaving some room for context. You may need to leave additional headroom if you are also running image generation or TTS.
Original model card can be found [here](https://huggingface.co/notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES)
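For readers who want to try one of these quants locally, a minimal sketch using the third-party `llama-cpp-python` bindings might look like the following (the file name and context size are assumptions; substitute the quant you actually downloaded):
```python
from llama_cpp import Llama

# Hypothetical file name for one of the quants in this repo; adjust to the file you downloaded.
llm = Llama(
    model_path="Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES.IQ3_S.gguf",
    n_gpu_layers=-1,   # offload all layers that fit on the GPU
    n_ctx=8192,        # leave enough context headroom, per the tip above
)
out = llm("Write the opening paragraph of a text adventure set on a derelict starship.", max_tokens=200)
print(out["choices"][0]["text"])
```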
|
InferenceIllusionist/Hyperion-1.5-Mistral-7B-iMat-GGUF
|
InferenceIllusionist
| 2024-03-20T11:28:12Z | 25 | 0 | null |
[
"gguf",
"conversational",
"iMat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-03T11:01:23Z |
---
tags:
- conversational
- gguf
- iMat
license: apache-2.0
---
<img src="https://i.imgur.com/P68dXux.png" width="400"/>
# Hyperion-1.5-Mistral-7B-iMat-GGUF
New importance matrix quantizations for Hyperion-1.5-Mistral-7B.
These i-quants have a better size-to-perplexity ratio, as they were created using an Importance Matrix file calculated from the fp16 (unquantized) gguf.
<b>All files created using latest (3/2) llama.cpp build, including IQ3_S improvements covered [here](https://github.com/ggerganov/llama.cpp/pull/5829)</b>
This model excels in the domains of science, medicine, mathematics, and computer science.
All credits to [Locutusque](https://huggingface.co/Locutusque/) for the model and [ikawrakow](https://github.com/ikawrakow) for stellar work on the new quants.
---
# Model Card for Locutusque/Hyperion-1.5-Mistral-7B

## Model Details
**Model Name**: Locutusque/Hyperion-1.5-Mistral-7B
**Base Model**: mistralai/Mistral-7B-v0.1
**Publisher**: M4-ai
**Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning.
**Language**: Multi-domain, English language.
**License**: Apache-2.0
## Model Description
`Locutusque/Hyperion-1.5-Mistral-7B` is a state-of-the-art language model fine-tuned on the Hyperion dataset for advanced reasoning across scientific domains. This model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include but are not limited to complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
## Intended Use
This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios:
- AI-driven tutoring systems for science, medicine, mathematics, and computer science.
- Assistive tools for professionals requiring fast and accurate domain-specific information retrieval.
- Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning.
- Automation in code generation and understanding complex programming context.
## Training Data
The `Locutusque/Hyperion-1.5-Mistral-7B` model was fine-tuned on the Hyperion-v1.5 dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks.
## Evaluation Results
Coming soon...
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Locutusque/Hyperion-1.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For a text generation task
input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate a response
outputs = model.generate(input_ids, max_length=200, num_return_sequences=1, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Known Limitations
The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality.
## Licensing Information
This model is released under the Apache-2.0 license.
## Citation Information
If you use Locutusque/Hyperion-1.5-Mistral-7B in your research, please cite the Hyperion dataset as follows:
```
@misc{sebastian_gabarain_2024,
title = {Hyperion-1.5: Illuminating the Path to Advanced Reasoning with a High-Quality, Multidisciplinary Question Answering Dataset},
author = {Sebastian Gabarain},
publisher = {HuggingFace},
year = {2024},
url = {https://huggingface.co/datasets/Locutusque/hyperion-v1.5}
}
```
|
c0d3r69/latest
|
c0d3r69
| 2024-03-20T11:25:46Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-03-20T11:24:55Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
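The configuration above corresponds to 4-bit NF4 loading with fp16 compute. A minimal sketch of how an equivalent `BitsAndBytesConfig` could be constructed at load time (the base checkpoint is not named in this card, so the model id below is a placeholder):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the settings listed above: 4-bit NF4, fp16 compute, no double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "base-model-id" is a placeholder; this card does not state which base model the adapter targets.
model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```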
### Framework versions
- PEFT 0.4.0
|
c0d3r69/sumit_sir
|
c0d3r69
| 2024-03-20T11:09:39Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-03-20T11:05:29Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
### Framework versions
- PEFT 0.4.0
|
dokyoungkim/wmt19-finetuned-it-de-to-en-3
|
dokyoungkim
| 2024-03-20T11:06:15Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"fsmt",
"text2text-generation",
"tanslation",
"generated_from_trainer",
"base_model:dokyoungkim/wmt19-finetuned-it-de-to-en-2",
"base_model:finetune:dokyoungkim/wmt19-finetuned-it-de-to-en-2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-20T01:54:41Z |
---
license: apache-2.0
base_model: dokyoungkim/wmt19-finetuned-it-de-to-en-2
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: wmt19-finetuned-it-de-to-en-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wmt19-finetuned-it-de-to-en-3
This model is a fine-tuned version of [dokyoungkim/wmt19-finetuned-it-de-to-en-2](https://huggingface.co/dokyoungkim/wmt19-finetuned-it-de-to-en-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1299
- Bleu: 47.4497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 40
- eval_batch_size: 160
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
|
AqeelShafy7/Whisper-Sinhala_Audio_to_Text
|
AqeelShafy7
| 2024-03-20T11:04:23Z | 186 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"trnslation",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-19T21:10:44Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- translation
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper-Sinhala_Audio_to_Text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper-Sinhala_Audio_to_Text
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9038
- Wer: 50.0822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0665 | 4.76 | 1000 | 0.5398 | 57.8125 |
| 0.0096 | 9.52 | 2000 | 0.6716 | 56.2089 |
| 0.0037 | 14.29 | 3000 | 0.7457 | 52.7549 |
| 0.0005 | 19.05 | 4000 | 0.8000 | 51.1513 |
| 0.002 | 23.81 | 5000 | 0.8057 | 51.6859 |
| 0.0005 | 28.57 | 6000 | 0.8150 | 50.3289 |
| 0.0005 | 33.33 | 7000 | 0.8445 | 51.0280 |
| 0.0 | 38.1 | 8000 | 0.8773 | 50.1234 |
| 0.0 | 42.86 | 9000 | 0.8944 | 50.1234 |
| 0.0 | 47.62 | 10000 | 0.9038 | 50.0822 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
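A brief, hedged usage sketch with the 🤗 `pipeline` API (the audio path is a placeholder for a Sinhala speech clip):
```python
from transformers import pipeline

# "sample_sinhala.wav" is a placeholder path; supply your own audio file.
asr = pipeline("automatic-speech-recognition", model="AqeelShafy7/Whisper-Sinhala_Audio_to_Text")
print(asr("sample_sinhala.wav")["text"])
```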
|
peldrak/maskformer-large-ade-finetuned-coastTrain-grCoastline
|
peldrak
| 2024-03-20T11:01:34Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"maskformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T10:00:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hlabedade/unit3_coursrl
|
hlabedade
| 2024-03-20T11:01:14Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-16T10:02:50Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 612.50 +/- 207.55
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hlabedade -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga hlabedade -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hlabedade
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
GS12321/WFWParings
|
GS12321
| 2024-03-20T11:00:34Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:adapter:mlabonne/NeuralHermes-2.5-Mistral-7B",
"region:us"
] | null | 2024-02-29T16:44:33Z |
---
library_name: peft
base_model: mlabonne/NeuralHermes-2.5-Mistral-7B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
|
Komala/hpv2_finetuned-llama-7b-chat-hf
|
Komala
| 2024-03-20T10:59:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-03-19T12:48:15Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: hpv2_finetuned-llama-7b-chat-hf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hpv2_finetuned-llama-7b-chat-hf
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Yorick/textual_inversion_cat1
|
Yorick
| 2024-03-20T10:59:08Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-19T02:01:57Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
inference: true
base_model: runwayml/stable-diffusion-v1-5
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - Yorick/textual_inversion_cat1
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
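Until the snippet above is filled in, here is a hedged sketch of typical usage with 🤗 Diffusers; the placeholder token `<cat-toy>` is an assumption, since the actual token learned for this embedding is not documented here:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo, then reference its placeholder token in the prompt.
pipe.load_textual_inversion("Yorick/textual_inversion_cat1")
# "<cat-toy>" is assumed; replace it with the token this embedding was actually trained with.
image = pipe("A photo of a <cat-toy> sitting on a windowsill").images[0]
image.save("cat_example.png")
```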
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
rorschach-40/home-batch_5_2000_-text-classification
|
rorschach-40
| 2024-03-20T10:59:05Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T10:55:33Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: home-batch_5_2000_-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# home-batch_5_2000_-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4284
- Precision: 0.8814
- Recall: 0.9286
- F1: 0.9043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 27 | 0.3932 | 0.8947 | 0.9107 | 0.9027 |
| No log | 2.0 | 54 | 0.4284 | 0.8814 | 0.9286 | 0.9043 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
baris-yazici/my_not_so_awesome_model
|
baris-yazici
| 2024-03-20T10:52:02Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-10T14:21:13Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_not_so_awesome_model
results: []
datasets:
- imdb
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_not_so_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a small subset (n=1000) of the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4475
- Accuracy: 0.834
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 32 | 0.6046 | 0.632 |
| No log | 2.0 | 64 | 0.4475 | 0.834 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
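A short, hedged inference sketch with the `pipeline` API (label names depend on the model's config and are not documented here):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="baris-yazici/my_not_so_awesome_model")
# Output is a list like [{"label": ..., "score": ...}]; the label names are whatever the model config defines.
print(clf("This movie was an absolute delight from start to finish."))
```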
|
c0d3r69/eptl_llama2_finetuned
|
c0d3r69
| 2024-03-20T10:51:36Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2024-03-20T10:42:42Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
- load_in_4bit: True
- load_in_8bit: False
### Framework versions
- PEFT 0.4.0
|
Subramanya3/Mistral-7B-shawgpt-ft
|
Subramanya3
| 2024-03-20T10:51:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T10:51:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yorick/textual_inversion_couple
|
Yorick
| 2024-03-20T10:50:17Z | 8 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"diffusers-training",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-19T02:35:07Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
- diffusers-training
inference: true
base_model: runwayml/stable-diffusion-v1-5
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - Yorick/textual_inversion_couple
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
rorschach-40/home-batch_4_2000_-text-classification
|
rorschach-40
| 2024-03-20T10:50:01Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T10:46:25Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: home-batch_4_2000_-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# home-batch_4_2000_-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4363
- Precision: 0.9091
- Recall: 0.9804
- F1: 0.9434
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 25 | 0.3185 | 0.8929 | 0.9804 | 0.9346 |
| No log | 2.0 | 50 | 0.4363 | 0.9091 | 0.9804 | 0.9434 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
adasgaleus/20240320102435_big_hinton
|
adasgaleus
| 2024-03-20T10:48:29Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-20T10:48:09Z |
---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: 20240320102435_big_hinton
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20240320102435_big_hinton
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0351
- Precision: 0.9436
- Recall: 0.9308
- F1: 0.9372
- Accuracy: 0.9859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 69
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 350
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0805 | 0.09 | 300 | 0.0626 | 0.9020 | 0.8843 | 0.8931 | 0.9758 |
| 0.0969 | 0.18 | 600 | 0.0770 | 0.8912 | 0.8486 | 0.8694 | 0.9704 |
| 0.0879 | 0.27 | 900 | 0.0682 | 0.8943 | 0.8733 | 0.8837 | 0.9735 |
| 0.0778 | 0.36 | 1200 | 0.0612 | 0.9013 | 0.8891 | 0.8952 | 0.9762 |
| 0.0703 | 0.44 | 1500 | 0.0564 | 0.9137 | 0.8909 | 0.9021 | 0.9779 |
| 0.0638 | 0.53 | 1800 | 0.0521 | 0.9244 | 0.8975 | 0.9107 | 0.9799 |
| 0.0579 | 0.62 | 2100 | 0.0480 | 0.9309 | 0.9029 | 0.9167 | 0.9812 |
| 0.0534 | 0.71 | 2400 | 0.0447 | 0.9323 | 0.9095 | 0.9208 | 0.9825 |
| 0.049 | 0.8 | 2700 | 0.0399 | 0.9329 | 0.9236 | 0.9282 | 0.9841 |
| 0.0451 | 0.89 | 3000 | 0.0373 | 0.9411 | 0.9226 | 0.9318 | 0.9849 |
| 0.0424 | 0.98 | 3300 | 0.0351 | 0.9436 | 0.9308 | 0.9372 | 0.9859 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.0a0+6a974be
- Datasets 2.18.0
- Tokenizers 0.15.2
|
llm1234/finetunedtinylalma2
|
llm1234
| 2024-03-20T10:45:58Z | 0 | 0 |
peft
|
[
"peft",
"llama",
"region:us"
] | null | 2024-03-20T10:44:49Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
pepijn223/Taxi-v3
|
pepijn223
| 2024-03-20T10:45:30Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-20T10:27:29Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks (it downloads and unpickles the model dict); `gym` must be imported separately.
model = load_from_hub(repo_id="pepijn223/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
quocviethere/mbert-finetuned-squadv2
|
quocviethere
| 2024-03-20T10:44:13Z | 118 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-20T09:30:25Z |
---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-uncased
tags:
- generated_from_trainer
model-index:
- name: mbert-finetuned-squadv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finetuned-squadv2
This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
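A minimal, hedged extractive QA sketch with the `pipeline` API (the question and context are illustrative only):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="quocviethere/mbert-finetuned-squadv2")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], result["score"])
```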
|
rorschach-40/home-batch_3_2000_-text-classification
|
rorschach-40
| 2024-03-20T10:40:58Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T10:37:24Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: home-batch_3_2000_-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# home-batch_3_2000_-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4922
- Precision: 0.8654
- Recall: 0.9375
- F1: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 25 | 0.4399 | 0.8776 | 0.8958 | 0.8866 |
| No log | 2.0 | 50 | 0.4922 | 0.8654 | 0.9375 | 0.9 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
huskyhong/noname-ai-v2_3-light
|
huskyhong
| 2024-03-20T10:39:47Z | 128 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-03-15T10:58:04Z |
---
license: apache-2.0
language:
- zh
---
Fine-tuned from Qwen.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("huskyhong/noname-ai-v2_3-light", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("huskyhong/noname-ai-v2_3-light", device_map="auto", trust_remote_code=True).eval() # 采用gpu加载模型
# model = AutoModelForCausalLM.from_pretrained("huskyhong/noname-ai-v2_3-light", device_map="cpu", trust_remote_code=True).float() # 采用cpu加载模型
model.generation_config = GenerationConfig.from_pretrained("huskyhong/noname-ai-v2_3-light", trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参
prompt = "请帮我编写一个技能,技能效果如下:" + input("请输入技能效果:")
response, history = model.chat(tokenizer, prompt, history = [])
print(response)
prompt = "请帮我编写一张卡牌,卡牌效果如下:" + input("请输入卡牌效果:")
response, history = model.chat(tokenizer, prompt, history = [])
print(response)
```
|
oscarmv/finetuned
|
oscarmv
| 2024-03-20T10:37:49Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-18T08:37:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gelukuMLG/Fimbulvetr-V2-Kuro-Lotus-bf16
|
gelukuMLG
| 2024-03-20T10:35:46Z | 16 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T09:48:59Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Fimbulvetr-V2-Kuro-Lotus
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* D:\MK\Models\Fimbulvetr-11B-v2
* D:\MK\Models\Kuro-Lotus-10.7B
### Original Models
* https://huggingface.co/saishf/Kuro-Lotus-10.7B
* https://huggingface.co/Sao10K/Fimbulvetr-11B-v2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: D:\MK\Models\Kuro-Lotus-10.7B
layer_range: [0, 48]
- model: D:\MK\Models\Fimbulvetr-11B-v2
layer_range: [0, 48]
merge_method: slerp
base_model: D:\MK\Models\Kuro-Lotus-10.7B
parameters:
t:
- filter: self_attn
value: [0.6, 0.7, 0.8, 0.9, 1]
- filter: mlp
value: [0.4, 0.3, 0.2, 0.1, 0]
- value: 0.5
dtype: bfloat16
```
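A minimal loading sketch, assuming standard `transformers` text-generation usage (the repository id is this repo; bf16 matches the merge dtype above, and the prompt and sampling settings are only illustrative):

```python
# Minimal sketch: load the merged checkpoint with transformers in bf16 (the merge dtype).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "gelukuMLG/Fimbulvetr-V2-Kuro-Lotus-bf16"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write the opening line of a winter adventure story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```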
|
AlignmentResearch/robust_llm_pythia-imdb-70m-mz-ada-v3-s-2
|
AlignmentResearch
| 2024-03-20T10:34:54Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:finetune:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T10:34:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-70m-deduped
model-index:
- name: robust_llm_pythia-imdb-70m-mz-ada-v3-s-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-70m-mz-ada-v3-s-2
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
damon-dev/damon-ai
|
damon-dev
| 2024-03-20T10:33:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T09:08:06Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kudod/xlm-roberta-large-finetuned-19March
|
Kudod
| 2024-03-20T10:31:42Z | 26 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-19T15:18:08Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-large-finetuned-19March
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-finetuned-19March
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Best F1: 75.2859
- Loss: 3.7291
- Exact: 38.1052
- F1: 56.2024
- Total: 3821
- Hasans Exact: 54.7305
- Hasans F1: 80.7951
- Hasans Total: 2653
- Noans Exact: 0.3425
- Noans F1: 0.3425
- Noans Total: 1168
- Best Exact: 58.8589
- Best Exact Thresh: 0.5893
- Best F1 Thresh: 0.9986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Best F1 | Validation Loss | Exact | F1 | Total | Hasans Exact | Hasans F1 | Hasans Total | Noans Exact | Noans F1 | Noans Total | Best Exact | Best Exact Thresh | Best F1 Thresh |
|:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|:-------:|:-----:|:------------:|:---------:|:------------:|:-----------:|:--------:|:-----------:|:----------:|:-----------------:|:--------------:|
| 2.8178 | 0.11 | 200 | 46.8496 | 1.8065 | 25.3599 | 44.8486 | 3821 | 36.5247 | 64.5935 | 2653 | 0.0 | 0.0 | 1168 | 35.5666 | 0.8043 | 0.9449 |
| 1.7382 | 0.21 | 400 | 52.1438 | 1.5589 | 31.0913 | 49.9881 | 3821 | 44.7795 | 71.9957 | 2653 | 0.0 | 0.0 | 1168 | 39.5970 | 0.8782 | 0.9368 |
| 1.5846 | 0.32 | 600 | 55.6720 | 1.5377 | 34.7291 | 52.9053 | 3821 | 50.0188 | 76.1971 | 2653 | 0.0 | 0.0 | 1168 | 42.3973 | 0.7223 | 0.8983 |
| 1.3941 | 0.43 | 800 | 56.0026 | 1.5137 | 33.5514 | 52.0941 | 3821 | 48.3227 | 75.0289 | 2653 | 0.0 | 0.0 | 1168 | 43.4441 | 0.7816 | 0.9125 |
| 1.3771 | 0.54 | 1000 | 62.2027 | 1.3178 | 34.8076 | 53.0440 | 3821 | 50.1319 | 76.3970 | 2653 | 0.0 | 0.0 | 1168 | 48.2596 | 0.8079 | 0.8816 |
| 1.3422 | 0.64 | 1200 | 61.6569 | 1.3593 | 36.7705 | 54.3557 | 3821 | 52.9589 | 78.2862 | 2653 | 0.0 | 0.0 | 1168 | 49.3065 | 0.6991 | 0.8115 |
| 1.2506 | 0.75 | 1400 | 67.1569 | 1.1634 | 36.4826 | 54.6555 | 3821 | 52.5443 | 78.7180 | 2653 | 0.0 | 0.0 | 1168 | 53.4415 | 0.8273 | 0.9368 |
| 1.2003 | 0.86 | 1600 | 68.0239 | 1.1864 | 38.0005 | 55.5455 | 3821 | 54.4666 | 79.7359 | 2653 | 0.5993 | 0.5993 | 1168 | 53.9388 | 0.8636 | 0.9244 |
| 1.2101 | 0.97 | 1800 | 69.7667 | 1.1769 | 37.8958 | 56.0515 | 3821 | 54.5797 | 80.7285 | 2653 | 0.0 | 0.0 | 1168 | 55.3258 | 0.9193 | 0.9518 |
| 1.0566 | 1.07 | 2000 | 68.7591 | 1.2100 | 38.1314 | 55.7480 | 3821 | 54.9190 | 80.2914 | 2653 | 0.0 | 0.0 | 1168 | 54.3052 | 0.7215 | 0.8240 |
| 0.9504 | 1.18 | 2200 | 69.5176 | 1.1620 | 37.8173 | 55.3358 | 3821 | 54.4666 | 79.6977 | 2653 | 0.0 | 0.0 | 1168 | 55.0118 | 0.8300 | 0.8945 |
| 0.9177 | 1.29 | 2400 | 71.4471 | 1.1401 | 38.4978 | 56.1115 | 3821 | 55.4467 | 80.8150 | 2653 | 0.0 | 0.0 | 1168 | 57.1055 | 0.7949 | 0.8881 |
| 0.9203 | 1.4 | 2600 | 71.8718 | 1.1977 | 38.4978 | 56.1517 | 3821 | 55.4467 | 80.8729 | 2653 | 0.0 | 0.0 | 1168 | 56.8699 | 0.7610 | 0.8631 |
| 0.9513 | 1.5 | 2800 | 71.7460 | 1.1057 | 38.2361 | 55.9155 | 3821 | 54.9943 | 80.4572 | 2653 | 0.1712 | 0.1712 | 1168 | 56.9484 | 0.7965 | 0.8897 |
| 0.8996 | 1.61 | 3000 | 72.6287 | 1.1207 | 38.2884 | 55.7625 | 3821 | 55.1451 | 80.3122 | 2653 | 0.0 | 0.0 | 1168 | 57.6812 | 0.8633 | 0.9512 |
| 0.9045 | 1.72 | 3200 | 72.1882 | 1.1152 | 39.0212 | 56.3236 | 3821 | 56.2005 | 81.1205 | 2653 | 0.0 | 0.0 | 1168 | 57.7859 | 0.7800 | 0.8888 |
| 0.9005 | 1.82 | 3400 | 72.1757 | 1.1551 | 39.0474 | 56.2213 | 3821 | 56.1251 | 80.8599 | 2653 | 0.2568 | 0.2568 | 1168 | 57.1840 | 0.8174 | 0.9516 |
| 0.9102 | 1.93 | 3600 | 72.9329 | 1.1191 | 38.7071 | 56.4978 | 3821 | 55.7482 | 81.3714 | 2653 | 0.0 | 0.0 | 1168 | 57.8121 | 0.8652 | 0.9541 |
| 0.8203 | 2.04 | 3800 | 73.4690 | 1.1953 | 39.3091 | 56.5349 | 3821 | 56.6152 | 81.4247 | 2653 | 0.0 | 0.0 | 1168 | 58.2047 | 0.8819 | 0.9430 |
| 0.6482 | 2.15 | 4000 | 73.9489 | 1.1673 | 38.3407 | 56.1575 | 3821 | 55.2205 | 80.8812 | 2653 | 0.0 | 0.0 | 1168 | 57.8906 | 0.6748 | 0.9039 |
| 0.6331 | 2.25 | 4200 | 73.9252 | 1.1596 | 39.0997 | 56.3727 | 3821 | 56.3136 | 81.1912 | 2653 | 0.0 | 0.0 | 1168 | 58.5449 | 0.8977 | 0.9269 |
| 0.6239 | 2.36 | 4400 | 73.6730 | 1.1594 | 38.8903 | 56.4217 | 3821 | 55.9744 | 81.2240 | 2653 | 0.0856 | 0.0856 | 1168 | 58.0738 | 0.8784 | 0.9743 |
| 0.6572 | 2.47 | 4600 | 72.7751 | 1.1498 | 39.1259 | 55.9415 | 3821 | 56.2759 | 80.4948 | 2653 | 0.1712 | 0.1712 | 1168 | 58.4664 | 0.7339 | 0.8944 |
| 0.6652 | 2.58 | 4800 | 73.7635 | 1.1811 | 39.2306 | 56.3404 | 3821 | 56.4267 | 81.0692 | 2653 | 0.1712 | 0.1712 | 1168 | 58.2832 | 0.8527 | 0.8606 |
| 0.6604 | 2.68 | 5000 | 73.2122 | 1.1319 | 39.5446 | 56.4206 | 3821 | 56.9544 | 81.2601 | 2653 | 0.0 | 0.0 | 1168 | 58.7281 | 0.7900 | 0.9177 |
| 0.6514 | 2.79 | 5200 | 74.2678 | 1.2162 | 39.1521 | 56.5326 | 3821 | 56.3890 | 81.4215 | 2653 | 0.0 | 0.0 | 1168 | 59.2253 | 0.8502 | 0.9812 |
| 0.6718 | 2.9 | 5400 | 74.6439 | 1.1330 | 39.4138 | 56.6473 | 3821 | 56.7659 | 81.5867 | 2653 | 0.0 | 0.0 | 1168 | 59.5394 | 0.8374 | 0.9469 |
| 0.643 | 3.0 | 5600 | 73.0242 | 1.2631 | 37.8435 | 55.6916 | 3821 | 54.3159 | 80.0217 | 2653 | 0.4281 | 0.4281 | 1168 | 57.5766 | 0.7596 | 0.8457 |
| 0.4361 | 3.11 | 5800 | 74.1499 | 1.3032 | 39.4661 | 56.5452 | 3821 | 56.7282 | 81.3266 | 2653 | 0.2568 | 0.2568 | 1168 | 59.0683 | 0.7984 | 0.8484 |
| 0.4238 | 3.22 | 6000 | 74.5952 | 1.3679 | 38.9950 | 56.3652 | 3821 | 56.1628 | 81.1804 | 2653 | 0.0 | 0.0 | 1168 | 59.3824 | 0.7710 | 0.9094 |
| 0.4468 | 3.33 | 6200 | 74.4299 | 1.3699 | 38.3931 | 56.3625 | 3821 | 55.2959 | 81.1764 | 2653 | 0.0 | 0.0 | 1168 | 58.2570 | 0.7611 | 0.8728 |
| 0.4625 | 3.43 | 6400 | 74.7995 | 1.3095 | 38.6810 | 56.6461 | 3821 | 55.7105 | 81.5849 | 2653 | 0.0 | 0.0 | 1168 | 59.2253 | 0.7687 | 0.8944 |
| 0.4634 | 3.54 | 6600 | 74.5887 | 1.4208 | 39.7802 | 57.0180 | 3821 | 56.6152 | 81.4421 | 2653 | 1.5411 | 1.5411 | 1168 | 59.2515 | 0.7964 | 0.8398 |
| 0.47 | 3.65 | 6800 | 74.3833 | 1.3648 | 39.1521 | 56.3557 | 3821 | 56.3136 | 81.0912 | 2653 | 0.1712 | 0.1712 | 1168 | 59.1730 | 0.8667 | 0.9015 |
| 0.4598 | 3.76 | 7000 | 74.4817 | 1.3067 | 39.1782 | 56.2569 | 3821 | 56.4267 | 81.0244 | 2653 | 0.0 | 0.0 | 1168 | 59.5656 | 0.7476 | 0.9250 |
| 0.4608 | 3.86 | 7200 | 74.4170 | 1.3304 | 38.7857 | 56.1282 | 3821 | 55.8613 | 80.8390 | 2653 | 0.0 | 0.0 | 1168 | 58.8328 | 0.7717 | 0.8846 |
| 0.4743 | 3.97 | 7400 | 74.4807 | 1.3145 | 39.8063 | 56.8286 | 3821 | 56.9167 | 81.4332 | 2653 | 0.9418 | 0.9418 | 1168 | 59.5394 | 0.7264 | 0.8104 |
| 0.3466 | 4.08 | 7600 | 74.2807 | 1.5695 | 38.0529 | 55.9634 | 3821 | 54.7305 | 80.5262 | 2653 | 0.1712 | 0.1712 | 1168 | 58.6234 | 0.6575 | 0.8711 |
| 0.3209 | 4.19 | 7800 | 74.4014 | 1.6007 | 39.2829 | 56.8477 | 3821 | 56.0121 | 81.3099 | 2653 | 1.2842 | 1.2842 | 1168 | 59.1468 | 0.7379 | 0.8345 |
| 0.2965 | 4.29 | 8000 | 75.1669 | 1.6125 | 39.7016 | 56.9376 | 3821 | 56.3890 | 81.2131 | 2653 | 1.7979 | 1.7979 | 1168 | 59.8273 | 0.8465 | 0.8573 |
| 0.323 | 4.4 | 8200 | 75.2468 | 1.5257 | 39.5185 | 56.3139 | 3821 | 56.8790 | 81.0688 | 2653 | 0.0856 | 0.0856 | 1168 | 60.5601 | 0.8994 | 0.9968 |
| 0.3188 | 4.51 | 8400 | 74.5531 | 1.5630 | 38.4193 | 56.2742 | 3821 | 54.9943 | 80.7100 | 2653 | 0.7705 | 0.7705 | 1168 | 58.9898 | 0.6921 | 0.9107 |
| 0.3316 | 4.61 | 8600 | 73.7564 | 1.6488 | 38.7071 | 56.8133 | 3821 | 54.6928 | 80.7703 | 2653 | 2.3973 | 2.3973 | 1168 | 57.6027 | 0.6885 | 0.8446 |
| 0.3335 | 4.72 | 8800 | 75.0539 | 1.5713 | 39.8063 | 57.6728 | 3821 | 55.8236 | 81.5560 | 2653 | 3.4247 | 3.4247 | 1168 | 59.0421 | 0.6784 | 0.9024 |
| 0.3062 | 4.83 | 9000 | 73.9140 | 1.6366 | 38.4193 | 56.5598 | 3821 | 54.4289 | 80.5560 | 2653 | 2.0548 | 2.0548 | 1168 | 58.0738 | 0.6447 | 0.9738 |
| 0.3317 | 4.94 | 9200 | 75.1317 | 1.5375 | 40.6438 | 57.9963 | 3821 | 56.3890 | 81.3811 | 2653 | 4.8801 | 4.8801 | 1168 | 59.1992 | 0.8043 | 0.8979 |
| 0.2665 | 5.04 | 9400 | 74.5945 | 1.7715 | 42.1879 | 59.9249 | 3821 | 55.7105 | 81.2563 | 2653 | 11.4726 | 11.4726 | 1168 | 58.6496 | 0.7039 | 0.8899 |
| 0.2044 | 5.15 | 9600 | 74.6704 | 2.0130 | 39.3876 | 57.1735 | 3821 | 55.4844 | 81.1006 | 2653 | 2.8253 | 2.8253 | 1168 | 58.7804 | 0.5561 | 0.9753 |
| 0.2035 | 5.26 | 9800 | 73.9333 | 1.9572 | 40.1727 | 58.0513 | 3821 | 54.6551 | 80.4048 | 2653 | 7.2774 | 7.2774 | 1168 | 58.1000 | 0.6755 | 0.7745 |
| 0.237 | 5.37 | 10000 | 74.7114 | 1.9111 | 40.0157 | 58.0402 | 3821 | 54.3913 | 80.3512 | 2653 | 7.3630 | 7.3630 | 1168 | 58.6757 | 0.6207 | 0.9428 |
| 0.2194 | 5.47 | 10200 | 74.5000 | 1.9111 | 38.8380 | 56.3116 | 3821 | 55.3336 | 80.5001 | 2653 | 1.3699 | 1.3699 | 1168 | 58.7543 | 0.6829 | 0.9918 |
| 0.243 | 5.58 | 10400 | 74.6447 | 1.7084 | 38.1576 | 56.2303 | 3821 | 54.8059 | 80.8353 | 2653 | 0.3425 | 0.3425 | 1168 | 58.2832 | 0.5820 | 0.7634 |
| 0.2261 | 5.69 | 10600 | 75.2228 | 1.6893 | 44.4125 | 62.3606 | 3821 | 53.8259 | 79.6757 | 2653 | 23.0308 | 23.0308 | 1168 | 58.8589 | 0.7203 | 0.9326 |
| 0.2411 | 5.8 | 10800 | 75.1561 | 1.7086 | 39.2567 | 56.7270 | 3821 | 55.7482 | 80.9099 | 2653 | 1.7979 | 1.7979 | 1168 | 59.3300 | 0.7076 | 0.9906 |
| 0.2266 | 5.9 | 11000 | 74.8371 | 1.8812 | 41.1672 | 58.9705 | 3821 | 55.0697 | 80.7110 | 2653 | 9.5890 | 9.5890 | 1168 | 58.8851 | 0.7277 | 0.9859 |
| 0.2262 | 6.01 | 11200 | 74.9561 | 1.9699 | 40.0157 | 58.2772 | 3821 | 54.5420 | 80.8432 | 2653 | 7.0205 | 7.0205 | 1168 | 58.6234 | 0.6622 | 0.9921 |
| 0.1435 | 6.12 | 11400 | 75.2732 | 2.3215 | 41.4813 | 59.1006 | 3821 | 55.5974 | 80.9738 | 2653 | 9.4178 | 9.4178 | 1168 | 59.3300 | 0.6085 | 0.9580 |
| 0.1562 | 6.22 | 11600 | 74.8525 | 2.2761 | 37.7126 | 56.3116 | 3821 | 53.4112 | 80.1984 | 2653 | 2.0548 | 2.0548 | 1168 | 57.8906 | 0.9478 | 0.9993 |
| 0.1602 | 6.33 | 11800 | 75.1296 | 2.2181 | 41.5860 | 59.5824 | 3821 | 54.5797 | 80.4992 | 2653 | 12.0719 | 12.0719 | 1168 | 58.9113 | 0.9592 | 0.9972 |
| 0.1617 | 6.44 | 12000 | 74.7754 | 2.1303 | 37.6865 | 56.0801 | 3821 | 54.2405 | 80.7320 | 2653 | 0.0856 | 0.0856 | 1168 | 58.0738 | 0.6140 | 0.9971 |
| 0.1732 | 6.55 | 12200 | 75.7393 | 2.0434 | 38.6025 | 56.5949 | 3821 | 55.3336 | 81.2473 | 2653 | 0.5993 | 0.5993 | 1168 | 59.5917 | 0.6486 | 0.9946 |
| 0.1268 | 6.65 | 12400 | 74.6427 | 2.2969 | 37.4509 | 55.7997 | 3821 | 53.7505 | 80.1774 | 2653 | 0.4281 | 0.4281 | 1168 | 57.9429 | 0.5942 | 0.8802 |
| 0.1588 | 6.76 | 12600 | 74.9582 | 2.1332 | 38.1052 | 56.7031 | 3821 | 53.9389 | 80.7246 | 2653 | 2.1404 | 2.1404 | 1168 | 58.2570 | 0.5290 | 0.9927 |
| 0.1623 | 6.87 | 12800 | 75.0142 | 2.0222 | 39.3876 | 56.9883 | 3821 | 55.3336 | 80.6831 | 2653 | 3.1678 | 3.1678 | 1168 | 58.9113 | 0.8747 | 0.8747 |
| 0.148 | 6.98 | 13000 | 75.1339 | 2.0930 | 38.2099 | 56.1811 | 3821 | 54.7305 | 80.6137 | 2653 | 0.6849 | 0.6849 | 1168 | 58.6234 | 0.6673 | 0.9933 |
| 0.1309 | 7.08 | 13200 | 75.4867 | 2.4402 | 42.1094 | 59.9857 | 3821 | 54.6174 | 80.3638 | 2653 | 13.6986 | 13.6986 | 1168 | 59.1730 | 0.6728 | 0.9612 |
| 0.1173 | 7.19 | 13400 | 74.7539 | 2.7111 | 42.2141 | 59.6892 | 3821 | 55.4844 | 80.6531 | 2653 | 12.0719 | 12.0719 | 1168 | 58.5449 | 0.5282 | 0.9707 |
| 0.108 | 7.3 | 13600 | 75.4562 | 2.4802 | 41.4551 | 59.4454 | 3821 | 54.5420 | 80.4526 | 2653 | 11.7295 | 11.7295 | 1168 | 58.5972 | 0.6205 | 0.9876 |
| 0.0985 | 7.4 | 13800 | 75.5736 | 2.8397 | 41.2196 | 59.1842 | 3821 | 54.7682 | 80.6419 | 2653 | 10.4452 | 10.4452 | 1168 | 59.0683 | 0.8408 | 0.9942 |
| 0.1144 | 7.51 | 14000 | 74.9702 | 2.5953 | 38.8380 | 57.0815 | 3821 | 53.9766 | 80.2519 | 2653 | 4.4521 | 4.4521 | 1168 | 58.4140 | 0.5533 | 0.7640 |
| 0.1067 | 7.62 | 14200 | 75.4923 | 2.7441 | 38.6810 | 56.2112 | 3821 | 55.1451 | 80.3931 | 2653 | 1.2842 | 1.2842 | 1168 | 59.5394 | 0.8269 | 1.0000 |
| 0.1127 | 7.73 | 14400 | 74.7363 | 2.8387 | 37.8958 | 55.8558 | 3821 | 54.3913 | 80.2583 | 2653 | 0.4281 | 0.4281 | 1168 | 58.5449 | 0.4981 | 0.9928 |
| 0.1111 | 7.83 | 14600 | 75.0496 | 2.8232 | 38.8380 | 56.3759 | 3821 | 55.7859 | 81.0449 | 2653 | 0.3425 | 0.3425 | 1168 | 58.8589 | 0.6597 | 0.9983 |
| 0.104 | 7.94 | 14800 | 75.2988 | 2.7491 | 38.8903 | 56.3024 | 3821 | 55.8236 | 80.9014 | 2653 | 0.4281 | 0.4281 | 1168 | 59.4085 | 0.9766 | 0.9954 |
| 0.0988 | 8.05 | 15000 | 75.0794 | 2.9967 | 38.8642 | 56.1519 | 3821 | 55.7482 | 80.6470 | 2653 | 0.5137 | 0.5137 | 1168 | 59.1468 | 0.6109 | 0.9883 |
| 0.0627 | 8.16 | 15200 | 74.9803 | 3.1843 | 38.5501 | 56.4955 | 3821 | 54.7682 | 80.6142 | 2653 | 1.7123 | 1.7123 | 1168 | 58.8851 | 0.5983 | 0.9990 |
| 0.0511 | 8.26 | 15400 | 75.0023 | 3.3279 | 38.4716 | 56.3207 | 3821 | 54.7682 | 80.4754 | 2653 | 1.4555 | 1.4555 | 1168 | 58.6496 | 0.6087 | 0.9914 |
| 0.081 | 8.37 | 15600 | 75.0066 | 3.3160 | 37.9482 | 56.0321 | 3821 | 54.6174 | 80.6629 | 2653 | 0.0856 | 0.0856 | 1168 | 58.8589 | 0.6251 | 0.6604 |
| 0.0909 | 8.48 | 15800 | 74.9020 | 3.2023 | 37.7650 | 56.0174 | 3821 | 54.2405 | 80.5286 | 2653 | 0.3425 | 0.3425 | 1168 | 58.2308 | 0.6750 | 0.9895 |
| 0.0724 | 8.59 | 16000 | 75.1556 | 3.2594 | 39.2829 | 57.3387 | 3821 | 54.6928 | 80.6978 | 2653 | 4.2808 | 4.2808 | 1168 | 58.7281 | 0.5745 | 1.0000 |
| 0.0793 | 8.69 | 16200 | 75.2078 | 3.2888 | 38.2622 | 56.1814 | 3821 | 54.8059 | 80.6141 | 2653 | 0.6849 | 0.6849 | 1168 | 59.0160 | 0.8687 | 1.0000 |
| 0.0627 | 8.8 | 16400 | 75.3907 | 3.4785 | 39.0212 | 56.9735 | 3821 | 54.7682 | 80.6240 | 2653 | 3.2534 | 3.2534 | 1168 | 58.8589 | 0.6609 | 0.9997 |
| 0.0934 | 8.91 | 16600 | 75.4373 | 3.3474 | 38.4454 | 56.2844 | 3821 | 55.1451 | 80.8378 | 2653 | 0.5137 | 0.5137 | 1168 | 58.8589 | 0.9383 | 0.9991 |
| 0.0583 | 9.01 | 16800 | 75.2529 | 3.4352 | 38.4454 | 55.9520 | 3821 | 55.3336 | 80.5475 | 2653 | 0.0856 | 0.0856 | 1168 | 59.1992 | 0.7693 | 0.9870 |
| 0.0427 | 9.12 | 17000 | 75.3640 | 3.4907 | 38.4716 | 56.5872 | 3821 | 54.5797 | 80.6709 | 2653 | 1.8836 | 1.8836 | 1168 | 59.0160 | 0.8924 | 1.0000 |
| 0.046 | 9.23 | 17200 | 75.1963 | 3.5282 | 38.4454 | 56.5199 | 3821 | 54.6174 | 80.6493 | 2653 | 1.7123 | 1.7123 | 1168 | 58.7804 | 0.6665 | 1.0000 |
| 0.042 | 9.34 | 17400 | 75.2151 | 3.6017 | 37.8697 | 56.0853 | 3821 | 54.3159 | 80.5511 | 2653 | 0.5137 | 0.5137 | 1168 | 58.4402 | 0.8206 | 0.9998 |
| 0.0466 | 9.44 | 17600 | 75.4089 | 3.5608 | 38.1052 | 56.1973 | 3821 | 54.8059 | 80.8631 | 2653 | 0.1712 | 0.1712 | 1168 | 58.8851 | 0.9627 | 0.9795 |
| 0.0502 | 9.55 | 17800 | 75.3440 | 3.6178 | 38.2884 | 56.2233 | 3821 | 55.1074 | 80.9382 | 2653 | 0.0856 | 0.0856 | 1168 | 58.9636 | 0.7981 | 0.9991 |
| 0.0505 | 9.66 | 18000 | 75.2088 | 3.7243 | 37.9482 | 56.0745 | 3821 | 54.6551 | 80.7616 | 2653 | 0.0 | 0.0 | 1168 | 58.5449 | 0.5150 | 0.9954 |
| 0.0426 | 9.77 | 18200 | 75.2649 | 3.7307 | 37.9220 | 56.0874 | 3821 | 54.5797 | 80.7425 | 2653 | 0.0856 | 0.0856 | 1168 | 58.7543 | 0.4981 | 0.9938 |
| 0.0536 | 9.87 | 18400 | 75.2783 | 3.7090 | 37.9220 | 56.1133 | 3821 | 54.5797 | 80.7799 | 2653 | 0.0856 | 0.0856 | 1168 | 58.8851 | 0.7739 | 0.9990 |
| 0.0364 | 9.98 | 18600 | 75.2859 | 3.7291 | 38.1052 | 56.2024 | 3821 | 54.7305 | 80.7951 | 2653 | 0.3425 | 0.3425 | 1168 | 58.8589 | 0.5893 | 0.9986 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
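The card does not include an inference example; a minimal sketch, assuming the standard `transformers` question-answering pipeline (the question and context below are illustrative only):

```python
# Minimal sketch: extractive question answering with the transformers pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="Kudod/xlm-roberta-large-finetuned-19March")
result = qa(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```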
|
Elkelouizajo/BERT_mnli_medium_1K
|
Elkelouizajo
| 2024-03-20T10:29:42Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T10:22:09Z |
---
license: apache-2.0
base_model: google-bert/bert-large-cased
tags:
- generated_from_trainer
model-index:
- name: results_bert_medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_bert_medium
This model is a fine-tuned version of [google-bert/bert-large-cased](https://huggingface.co/google-bert/bert-large-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 8446
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11.0
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Kxn-6490/q-FrozenLake-v1-4x4-noSlippery
|
Kxn-6490
| 2024-03-20T10:23:18Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-20T10:23:14Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the current course notebooks use Gymnasium; older versions used `import gym`

# `load_from_hub` is the helper defined in the Deep RL Course notebook
# (it downloads the .pkl file from the Hub and unpickles it)
model = load_from_hub(repo_id="Kxn-6490/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AlignmentResearch/robust_llm_pythia-spam-160m-mz-ada-v3-s-2
|
AlignmentResearch
| 2024-03-20T10:23:08Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m-deduped",
"base_model:finetune:EleutherAI/pythia-160m-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T10:22:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-160m-deduped
model-index:
- name: robust_llm_pythia-spam-160m-mz-ada-v3-s-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-spam-160m-mz-ada-v3-s-2
This model is a fine-tuned version of [EleutherAI/pythia-160m-deduped](https://huggingface.co/EleutherAI/pythia-160m-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
xxx777xxxASD/Susanoo-10.7B-bpw-6.5
|
xxx777xxxASD
| 2024-03-20T10:22:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"roleplay",
"conversational",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T08:46:10Z |
---
license: cc-by-4.0
base_model:
- localfultonextractor/Susanoo-10.7B
library_name: transformers
tags:
- merge
- roleplay
- conversational
language:
- en
---

EXL2 6.5 bpw quant of [LocalFultonExtractor's](https://huggingface.co/localfultonextractor) [Susanoo-10.7B](https://huggingface.co/localfultonextractor/Susanoo-10.7B)
(fits in 12 GB VRAM with 32k context and a 4-bit cache)
|
pepijn223/q-FrozenLake-v1-4x4-noSlippery
|
pepijn223
| 2024-03-20T10:20:24Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-20T10:20:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the current course notebooks use Gymnasium; older versions used `import gym`

# `load_from_hub` is the helper defined in the Deep RL Course notebook
# (it downloads the .pkl file from the Hub and unpickles it)
model = load_from_hub(repo_id="pepijn223/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
zrvicc/Reinforce-CartPole-v1
|
zrvicc
| 2024-03-20T10:20:08Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-16T10:13:59Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
kertob/outputs
|
kertob
| 2024-03-20T10:19:55Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-20T09:21:44Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 250
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Sreenamol/BERTModified-rawbert-finetuned-wikitext-test
|
Sreenamol
| 2024-03-20T10:17:47Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-03-20T05:58:37Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: BERTModified-rawbert-finetuned-wikitext-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTModified-rawbert-finetuned-wikitext-test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 20.8186
- Precision: 0.0476
- Recall: 0.0476
- F1: 0.0476
- Accuracy: 0.0476
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 21.0846 | 1.0 | 25 | 20.9953 | 0.0114 | 0.0114 | 0.0114 | 0.0114 |
| 17.8286 | 2.0 | 50 | 20.7823 | 0.0114 | 0.0114 | 0.0114 | 0.0114 |
| 15.1916 | 3.0 | 75 | 20.7021 | 0.0171 | 0.0171 | 0.0171 | 0.0171 |
| 12.7015 | 4.0 | 100 | 20.6023 | 0.0248 | 0.0248 | 0.0248 | 0.0248 |
| 10.852 | 5.0 | 125 | 20.5528 | 0.0324 | 0.0324 | 0.0324 | 0.0324 |
| 9.2624 | 6.0 | 150 | 20.5556 | 0.0324 | 0.0324 | 0.0324 | 0.0324 |
| 7.8348 | 7.0 | 175 | 20.5343 | 0.0343 | 0.0343 | 0.0343 | 0.0343 |
| 6.762 | 8.0 | 200 | 20.5861 | 0.0381 | 0.0381 | 0.0381 | 0.0381 |
| 5.8667 | 9.0 | 225 | 20.6005 | 0.0381 | 0.0381 | 0.0381 | 0.0381 |
| 5.184 | 10.0 | 250 | 20.6594 | 0.0438 | 0.0438 | 0.0438 | 0.0438 |
| 4.4605 | 11.0 | 275 | 20.6880 | 0.0457 | 0.0457 | 0.0457 | 0.0457 |
| 4.106 | 12.0 | 300 | 20.7090 | 0.0457 | 0.0457 | 0.0457 | 0.0457 |
| 3.622 | 13.0 | 325 | 20.7341 | 0.0457 | 0.0457 | 0.0457 | 0.0457 |
| 3.3097 | 14.0 | 350 | 20.7556 | 0.0476 | 0.0476 | 0.0476 | 0.0476 |
| 3.0423 | 15.0 | 375 | 20.8040 | 0.0495 | 0.0495 | 0.0495 | 0.0495 |
| 2.8348 | 16.0 | 400 | 20.8144 | 0.0533 | 0.0533 | 0.0533 | 0.0533 |
| 2.6718 | 17.0 | 425 | 20.8144 | 0.0495 | 0.0495 | 0.0495 | 0.0495 |
| 2.5584 | 18.0 | 450 | 20.8312 | 0.0533 | 0.0533 | 0.0533 | 0.0533 |
| 2.4502 | 19.0 | 475 | 20.8228 | 0.0514 | 0.0514 | 0.0514 | 0.0514 |
| 2.4219 | 20.0 | 500 | 20.8194 | 0.0514 | 0.0514 | 0.0514 | 0.0514 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
kishanbodybrain/llama2-qlora-finetunined-french
|
kishanbodybrain
| 2024-03-20T10:15:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T09:32:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sumoz/OpenHathi-7B-Hi-v0.1-adapter
|
sumoz
| 2024-03-20T10:14:40Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:sarvamai/OpenHathi-7B-Hi-v0.1-Base",
"base_model:adapter:sarvamai/OpenHathi-7B-Hi-v0.1-Base",
"region:us"
] | null | 2024-03-20T10:13:40Z |
---
library_name: peft
base_model: sarvamai/OpenHathi-7B-Hi-v0.1-Base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
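In the absence of an official snippet, a minimal loading sketch, assuming this repository holds a PEFT (LoRA-style) adapter for the base model named in the metadata, `sarvamai/OpenHathi-7B-Hi-v0.1-Base`; the adapter's task and training data are not documented, so treat the prompt and settings as illustrative:

```python
# Minimal sketch: load the base model, then attach this repo as a PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "sarvamai/OpenHathi-7B-Hi-v0.1-Base"      # from the card metadata
adapter_id = "sumoz/OpenHathi-7B-Hi-v0.1-adapter"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Translate to Hindi: How are you today?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```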
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.1.dev0
|
Gordon119/TD-openai-whisper-large-v2-reproduce-epoch2-total5epoch
|
Gordon119
| 2024-03-20T10:13:51Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-16T19:12:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
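A minimal sketch, assuming from the repository name that this is a fine-tuned `openai/whisper-large-v2` checkpoint with full weights available (the card itself does not confirm the task, language, or contents of the repo):

```python
# Minimal sketch: speech recognition with the transformers pipeline,
# assuming this repo contains a complete Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Gordon119/TD-openai-whisper-large-v2-reproduce-epoch2-total5epoch",
)
print(asr("sample.wav"))  # "sample.wav" is a hypothetical local audio file
```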
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FlavioBF/convnext-tiny-224-finetuned-eurosat-albumentations
|
FlavioBF
| 2024-03-20T10:10:39Z | 194 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/convnext-tiny-224",
"base_model:finetune:facebook/convnext-tiny-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-20T10:03:41Z |
---
license: apache-2.0
base_model: facebook/convnext-tiny-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: convnext-tiny-224-finetuned-eurosat-albumentations
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9544444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-eurosat-albumentations
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2548
- Accuracy: 0.9544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.269 | 1.0 | 190 | 0.2548 | 0.9544 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
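A minimal inference sketch, assuming the standard `transformers` image-classification pipeline (the image path is illustrative; the label set comes from the undocumented `imagefolder` dataset used for fine-tuning):

```python
# Minimal sketch: classify an image with the fine-tuned ConvNeXt checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="FlavioBF/convnext-tiny-224-finetuned-eurosat-albumentations",
)
print(classifier("example.jpg"))  # path or URL to an image (hypothetical input)
```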
|
Infinimol/miiqu-gguf
|
Infinimol
| 2024-03-20T10:08:36Z | 0 | 7 |
transformers
|
[
"transformers",
"merge",
"en",
"de",
"fr",
"es",
"it",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-03-15T14:39:11Z |
---
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- merge
license: other
---
# miiqu-105b-v1.0
Developed by [Infinimol AI GmbH](https://www.infinimol.com/)
Also Available:
- EXL2: [5.0bpw](https://huggingface.co/Infinimol/miiqu-exl2)
- F16: [HF](https://huggingface.co/Infinimol/miiqu-f16)
8th place on [EQ-Bench](https://eqbench.com/), beating Qwen1.5-72B-Chat, miqudev/miqu-1-70b, mistral-medium and claude-3-sonnet-20240229. All without fine-tuning or additional training.
Thanks for support from: [turboderp](https://github.com/turboderp), [silphendio](https://github.com/silphendio), [sqrkl](https://github.com/sqrkl), and [ngxson](https://github.com/ngxson)!
### ❗ Q4_K_M files are split and require joining
**Note:** HF does not support uploading files larger than 50GB. The Q4_K_M files are supplied as split files.
<details><summary>Click for instructions regarding Q4_K_M files</summary>
#### Process
Please download:
- `miiqu.gguf-split-aa`
- `miiqu.gguf-split-ab`
- `miiqu.gguf-split-ac`
- `miiqu.gguf-split-ad`
- `miiqu.gguf-split-ae`
- `miiqu.gguf-split-af`
To join the files, do the following:
Linux and macOS:
```sh
cat miiqu.gguf-split-a* > miiqu_Q4_K_M.gguf && rm miiqu.gguf-split-a*
```
Windows command line:
```cmd
COPY /B miiqu.gguf-split-aa + miiqu.gguf-split-ab + miiqu.gguf-split-ac + miiqu.gguf-split-ad + miiqu.gguf-split-ae + miiqu.gguf-split-af miiqu_Q4_K_M.gguf
DEL miiqu.gguf-split-aa miiqu.gguf-split-ab miiqu.gguf-split-ac miiqu.gguf-split-ad miiqu.gguf-split-ae miiqu.gguf-split-af
```
</details>
## Model Details
- Max Context: 32768 tokens
- Layers: 105
### Prompt template: ChatML or Mistral
chatml:
```
<|im_start|><|user|>\n<|user-message|><|im_end|>\n<|im_start|><|bot|>\n<|bot-message|><|im_end|>\n
```
mistral:
```
[INST] <|user|><|user-message|>[/INST]<|bot|><|bot-message|></s>
```
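A minimal llama-cpp-python sketch (the joined `miiqu_Q4_K_M.gguf` filename comes from the instructions above; reading `<|user|>`/`<|bot|>` as the usual ChatML role names is an assumption):
```python
from llama_cpp import Llama

# Load the joined Q4_K_M file with the full 32k context window.
llm = Llama(model_path="miiqu_Q4_K_M.gguf", n_ctx=32768)

# ChatML-style prompt built from the template above (role names assumed).
prompt = (
    "<|im_start|>user\n"
    "What is the capital of Italy?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```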
|
Infinimol/miiqu-f16
|
Infinimol
| 2024-03-20T10:08:21Z | 56 | 11 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"en",
"de",
"fr",
"es",
"it",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-16T10:44:06Z |
---
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- merge
license: other
---
# miiqu-105b-v1.0
Developed by [Infinimol AI GmbH](https://www.infinimol.com/)
Also Available:
- GGUF: [Q4_K_M](https://huggingface.co/Infinimol/miiqu-gguf)
- EXL2: [5.0bpw](https://huggingface.co/Infinimol/miiqu-exl2)
8th place on [EQ-Bench](https://eqbench.com/), beating Qwen1.5-72B-Chat, miqudev/miqu-1-70b, mistral-medium and claude-3-sonnet-20240229, all without fine-tuning or additional training.
Thanks for the support from [turboderp](https://github.com/turboderp), [silphendio](https://github.com/silphendio), [sqrkl](https://github.com/sqrkl), and [ngxson](https://github.com/ngxson)!
## Model Details
- Max Context: 32768 tokens
- Layers: 105
### Prompt template: ChatML or Mistral
chatml:
```
<|im_start|><|user|>\n<|user-message|><|im_end|>\n<|im_start|><|bot|>\n<|bot-message|><|im_end|>\n
```
mistral:
```
[INST] <|user|><|user-message|>[/INST]<|bot|><|bot-message|></s>
```
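A minimal transformers sketch, hardware permitting (treating the `<|user|>`/`<|bot|>` markers as placeholders that disappear from the final prompt is an assumption):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Infinimol/miiqu-f16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Mistral-style prompt from the template above, with the placeholders filled in.
prompt = "[INST] What is the capital of Italy?[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```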
|
girtcius/gemma-2b-dante-lora
|
girtcius
| 2024-03-20T10:06:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T10:06:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
1TuanPham/T-Llama-v1.1
|
1TuanPham
| 2024-03-20T10:06:15Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"vi",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-22T04:17:29Z |
---
license: apache-2.0
language:
- vi
- en
---
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Tuan Pham (FPTU HCM Student)
- **Model type:** Llama2-7B Decoder-only
- **Finetuned from model:**
  * meta-llama/Llama-2-7b
  * bkai-foundation-models/vietnamese-llama2-7b-120GB
  * yeen214/llama2_7b_merge_orcafamily
- **Bilingual support:** English and Vietnamese
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:**
* Training: https://github.com/vTuanpham/Vietnamese_QA_System
* Data: https://github.com/vTuanpham/Large_dataset_translator
- **Paper:** ...
- **Demo:** ...
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Prompt template
```
[SYSTEM_PROMPT]
####### Instruction:
[INPUT]
%%%%%%% Response:
[RESPONSE]
```
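For illustration, a prompt for this template can be built as a plain string before being passed to a pipeline (the helper function, the example system prompt, and the exact spacing are assumptions):
```python
# Illustrative only: helper name, system prompt, and spacing are assumptions.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        f"{system_prompt}\n"
        "####### Instruction:\n"
        f"{user_input}\n"
        "%%%%%%% Response:\n"
    )

prompt = build_prompt(
    "Bạn là một trợ lý AI hữu ích.",  # "You are a helpful AI assistant." (assumed system prompt)
    "Phạm Nhật Vượng là ai?",
)
```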
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import torch
from torch.cuda.amp import autocast
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline

model_name = "1TuanPham/T-Llama-v1.1"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    use_cache=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, streamer=streamer)

with autocast():
    output_default = pipe("Phạm Nhật Vượng là ", pad_token_id=tokenizer.eos_token_id, max_new_tokens=128)
```
## Training Details
**Hardware Type:**
* GPU: NVIDIA Tesla P100 16GB
* System RAM: 29GB
**Hours used:** ~42.5 (approx.)
### Training Data
* BactrianX
* OpenOrca_translated
* WizardLM_70k_translated
* TigerLabMathInstruct_translated_vi
* GradeSchoolMathInstruct_translated
* vilm_lima-vi
* MTEngVietnamese
* databricks_dolly15k_translated
* AlpacaCleaned_translated
* databricks_dolly15k
* OpenOrca
* GradeSchoolMathInstruct
* AlpacaCleaned
* WebglmQA
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
* Learning rate: 2e-5 (cosine schedule)
* Optimizer: PagedLion8bit
* QLoRA: rank 64, 4-bit quantization
* 250k examples (70% Vietnamese, 30% English) for 3.37 epochs
* 350k examples (60% Vietnamese, 40% English) for 1.1 epochs
### Training loss

## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Results
[More Information Needed]
## Technical Specifications
### Model Architecture and Objective
[More Information Needed]
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
## Model Card Authors
## Model Card Contact
[More Information Needed]
|
rorschach-40/flan-t5-large-batch_1_2000_tak-text-classification
|
rorschach-40
| 2024-03-20T10:05:54Z | 51 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google/flan-t5-large",
"base_model:finetune:google/flan-t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T10:02:18Z |
---
license: apache-2.0
base_model: google/flan-t5-large
tags:
- generated_from_trainer
model-index:
- name: flan-t5-large-batch_1_2000_tak-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-batch_1_2000_tak-text-classification
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 26 | 0.5347 | 0.75 | 1.0 | 0.8571 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
maddi99/bon_mi_bn
|
maddi99
| 2024-03-20T10:05:29Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T10:01:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SpideyDLK/wav2vec2-large-xls-r-300m-sinhala-low-LR-part1
|
SpideyDLK
| 2024-03-20T10:03:12Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-19T07:19:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
humung/Ko-PlatYi-6B-vlending-cs-qkvo-v0.0.2
|
humung
| 2024-03-20T10:01:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T10:00:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlignmentResearch/robust_llm_pythia-imdb-31m-mz-ada-v3-s-2
|
AlignmentResearch
| 2024-03-20T09:55:31Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"base_model:finetune:EleutherAI/pythia-31m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:55:24Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-31m
model-index:
- name: robust_llm_pythia-imdb-31m-mz-ada-v3-s-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-31m-mz-ada-v3-s-2
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
DatPySci/tiny-llama-sft
|
DatPySci
| 2024-03-20T09:55:05Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T09:39:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlignmentResearch/robust_llm_pythia-imdb-31m-mz-ada-v3-s-1
|
AlignmentResearch
| 2024-03-20T09:53:48Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"base_model:finetune:EleutherAI/pythia-31m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:53:38Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-31m
model-index:
- name: robust_llm_pythia-imdb-31m-mz-ada-v3-s-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-31m-mz-ada-v3-s-1
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
peldrak/maskformer-base-ade-finetuned-coastTrain-grCoastline
|
peldrak
| 2024-03-20T09:50:20Z | 33 | 0 |
transformers
|
[
"transformers",
"safetensors",
"maskformer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-14T21:26:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shiaulteyr/Foxxxy-DialoGPT-large
|
shiaulteyr
| 2024-03-20T09:49:41Z | 130 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T09:49:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ighoshsubho/mistral-7b-stepback-prompt-unsloth
|
ighoshsubho
| 2024-03-20T09:45:41Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T09:38:47Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** ighoshsubho
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
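A minimal loading sketch for this checkpoint (assuming standard transformers usage; the prompt and generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ighoshsubho/mistral-7b-stepback-prompt-unsloth"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Explain step-back prompting in one sentence.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```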
|
Praveenna/rl_course_vizdoom_health_gathering_supreme
|
Praveenna
| 2024-03-20T09:45:20Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-20T09:45:14Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.72 +/- 2.39
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r Praveenna/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
j78/thriller-books-xyz
|
j78
| 2024-03-20T09:42:15Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-20T09:38:14Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Thriller-books-xyz Dreambooth model trained by j78 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
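A minimal diffusers sketch for trying the concept (the prompt, precision, and device are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "j78/thriller-books-xyz", torch_dtype=torch.float16
).to("cuda")

# Example prompt is an assumption; adjust it to the trained concept.
image = pipe("a thriller-books-xyz book cover on a wooden table").images[0]
image.save("thriller_books_xyz.png")
```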
Sample pictures of this concept:
|
rorschach-40/home-batch_9_5000-text-classification
|
rorschach-40
| 2024-03-20T09:37:41Z | 50 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:35:44Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: home-batch_9_5000-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# home-batch_9_5000-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3602
- Precision: 0.9444
- Recall: 0.9379
- F1: 0.9412
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 67 | 0.2118 | 0.9329 | 0.9586 | 0.9456 |
| 0.2691 | 2.0 | 134 | 0.3265 | 0.9444 | 0.9379 | 0.9412 |
| 0.09 | 3.0 | 201 | 0.3602 | 0.9444 | 0.9379 | 0.9412 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
stablediffusionapi/juggernautv9-xl
|
stablediffusionapi
| 2024-03-20T09:36:08Z | 39 | 1 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-03-20T09:33:04Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "juggernautv9-xl".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/juggernautv9-xl)
Model link: [View model](https://modelslab.com/models/juggernautv9-xl)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "juggernautv9-xl",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
urkidi/Taxi-v.0.0.0
|
urkidi
| 2024-03-20T09:33:22Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-20T09:33:20Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v.0.0.0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="urkidi/Taxi-v.0.0.0", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
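A short greedy-evaluation sketch building on the snippet above (assuming the Gymnasium step API and that the pickled dict exposes a `qtable` entry, as in the course template):
```python
import numpy as np

state, info = env.reset()
done = False
total_reward = 0
while not done:
    # Greedy action from the loaded Q-table ("qtable" key assumed from the course template).
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```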
|
Adriatogi/segformer-b1-finetuned-segments-graffiti
|
Adriatogi
| 2024-03-20T09:32:26Z | 191 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b1",
"base_model:finetune:nvidia/mit-b1",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2024-03-20T09:23:20Z |
---
license: other
base_model: nvidia/mit-b1
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b1-finetuned-segments-graffiti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b1-finetuned-segments-graffiti
This model is a fine-tuned version of [nvidia/mit-b1](https://huggingface.co/nvidia/mit-b1) on the Adriatogi/graffiti dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2171
- Mean Iou: 0.8381
- Mean Accuracy: 0.9102
- Overall Accuracy: 0.9168
- Accuracy Not Graf: 0.9379
- Accuracy Graf: 0.8826
- Iou Not Graf: 0.8748
- Iou Graf: 0.8015
## Model description
More information needed
## Intended uses & limitations
More information needed
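A minimal inference sketch with 🤗 Transformers (the example image path and the label-id mapping in the last comment are assumptions, since this card does not document them):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "Adriatogi/segformer-b1-finetuned-segments-graffiti"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("wall.jpg")  # replace with your own image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]  # per-pixel class ids (graffiti vs. not graffiti, ids assumed)
```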
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Not Graf | Accuracy Graf | Iou Not Graf | Iou Graf |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-----------------:|:-------------:|:------------:|:--------:|
| 0.4076 | 0.42 | 20 | 0.5389 | 0.6053 | 0.7982 | 0.7541 | 0.6139 | 0.9825 | 0.6073 | 0.6033 |
| 0.3386 | 0.83 | 40 | 0.2883 | 0.7962 | 0.8984 | 0.8898 | 0.8625 | 0.9343 | 0.8290 | 0.7634 |
| 0.1964 | 1.25 | 60 | 0.2514 | 0.8061 | 0.9009 | 0.8964 | 0.8819 | 0.9200 | 0.8406 | 0.7716 |
| 0.1723 | 1.67 | 80 | 0.2259 | 0.8269 | 0.9058 | 0.9100 | 0.9235 | 0.8880 | 0.8641 | 0.7898 |
| 0.1981 | 2.08 | 100 | 0.2338 | 0.8119 | 0.9040 | 0.8999 | 0.8869 | 0.9210 | 0.8459 | 0.7778 |
| 0.2827 | 2.5 | 120 | 0.2106 | 0.8251 | 0.9080 | 0.9084 | 0.9095 | 0.9066 | 0.8601 | 0.7902 |
| 0.1864 | 2.92 | 140 | 0.2241 | 0.8232 | 0.8956 | 0.9097 | 0.9546 | 0.8365 | 0.8675 | 0.7790 |
| 0.1362 | 3.33 | 160 | 0.2185 | 0.8257 | 0.8978 | 0.9109 | 0.9525 | 0.8431 | 0.8688 | 0.7826 |
| 0.1264 | 3.75 | 180 | 0.2155 | 0.8237 | 0.9054 | 0.9079 | 0.9156 | 0.8952 | 0.8602 | 0.7871 |
| 0.1688 | 4.17 | 200 | 0.2241 | 0.8206 | 0.8985 | 0.9072 | 0.9346 | 0.8625 | 0.8618 | 0.7795 |
| 0.1198 | 4.58 | 220 | 0.2080 | 0.8331 | 0.9087 | 0.9137 | 0.9296 | 0.8877 | 0.8697 | 0.7965 |
| 0.111 | 5.0 | 240 | 0.2033 | 0.8369 | 0.9133 | 0.9154 | 0.9221 | 0.9044 | 0.8710 | 0.8027 |
| 0.2003 | 5.42 | 260 | 0.2214 | 0.8262 | 0.9118 | 0.9084 | 0.8976 | 0.9261 | 0.8586 | 0.7938 |
| 0.1369 | 5.83 | 280 | 0.2044 | 0.8396 | 0.9147 | 0.9170 | 0.9245 | 0.9048 | 0.8734 | 0.8058 |
| 0.1901 | 6.25 | 300 | 0.1968 | 0.8411 | 0.9119 | 0.9185 | 0.9393 | 0.8846 | 0.8771 | 0.8050 |
| 0.1887 | 6.67 | 320 | 0.2098 | 0.8367 | 0.9100 | 0.9159 | 0.9344 | 0.8857 | 0.8731 | 0.8002 |
| 0.0738 | 7.08 | 340 | 0.2205 | 0.8357 | 0.9127 | 0.9147 | 0.9211 | 0.9043 | 0.8699 | 0.8014 |
| 0.1166 | 7.5 | 360 | 0.2274 | 0.8317 | 0.9046 | 0.9135 | 0.9420 | 0.8672 | 0.8709 | 0.7924 |
| 0.1247 | 7.92 | 380 | 0.2225 | 0.8310 | 0.9051 | 0.9130 | 0.9381 | 0.8722 | 0.8698 | 0.7923 |
| 0.1212 | 8.33 | 400 | 0.2230 | 0.8345 | 0.9108 | 0.9143 | 0.9254 | 0.8961 | 0.8699 | 0.7991 |
| 0.0979 | 8.75 | 420 | 0.2226 | 0.8352 | 0.9076 | 0.9153 | 0.9400 | 0.8752 | 0.8730 | 0.7973 |
| 0.0984 | 9.17 | 440 | 0.2189 | 0.8354 | 0.9106 | 0.9149 | 0.9287 | 0.8925 | 0.8712 | 0.7997 |
| 0.1151 | 9.58 | 460 | 0.2185 | 0.8382 | 0.9098 | 0.9170 | 0.9396 | 0.8800 | 0.8751 | 0.8013 |
| 0.0989 | 10.0 | 480 | 0.2171 | 0.8381 | 0.9102 | 0.9168 | 0.9379 | 0.8826 | 0.8748 | 0.8015 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
rorschach-40/home-batch_8_5000-text-classification
|
rorschach-40
| 2024-03-20T09:32:14Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:30:22Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: home-batch_8_5000-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# home-batch_8_5000-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4705
- Precision: 0.9097
- Recall: 0.9658
- F1: 0.9369
## Model description
More information needed
## Intended uses & limitations
More information needed
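A hedged usage sketch with the 🤗 Transformers pipeline, assuming the checkpoint exposes a sequence-classification head (the label names are not documented in this card, so inspect the output to see them):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="rorschach-40/home-batch_8_5000-text-classification")
print(clf("Example input text to score."))  # returns a label/score dict per input
```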
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 67 | 0.2655 | 0.9371 | 0.9178 | 0.9273 |
| 0.2091 | 2.0 | 134 | 0.4453 | 0.9038 | 0.9658 | 0.9338 |
| 0.0974 | 3.0 | 201 | 0.4705 | 0.9097 | 0.9658 | 0.9369 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
musiclang/musiclang-v2
|
musiclang
| 2024-03-20T09:29:12Z | 0 | 62 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-02-19T16:58:29Z |
---
library_name: transformers
tags: []
---
MusicLang : Controllable Symbolic Music Generation
========================================================

🎶 <b> You want to generate music that you can export to your favourite DAW in MIDI ?</b>
🎛️ <b> You want to control the chord progression of the generated music ? </b>
🚀 <b> You need to run it fast on your laptop without a gpu ?</b>
Here is MusicLang Predict, your controllable music copilot.
I just want to try !
--------------------
[](https://colab.research.google.com/drive/1MA2mek826c05BjbWk2nRkVv2rW7kIU_S?usp=sharing)
Go to our Colab: we have a lot of cool examples, from generating creative musical ideas to continuing a song with a specified chord progression.
I am more serious about it
--------------------------
Install the musiclang-predict package :
```bash
pip install musiclang_predict
```
Then open your favourite notebook and start generating music in a few lines :
```python
from musiclang_predict import MusicLangPredictor
nb_tokens = 1024
temperature = 0.9 # Don't go over 1.0, at your own risk!
top_p = 1.0 # <= 1.0; usually 1.0 works best and avoids overly repetitive music
seed = 16 # change here to change result, or set to 0 to unset seed
ml = MusicLangPredictor('musiclang/musiclang-v2') # Only available model for now
score = ml.predict(
nb_tokens=nb_tokens, # 1024 tokens ~ 25s of music (depending on the number of instruments generated)
temperature=temperature,
topp=top_p,
rng_seed=seed # change here to change result, or set to 0 to unset seed
)
score.to_midi('test.mid') # Open that file in your favourite DAW, score editor or even in VLC
```
You were talking about controlling the chord progression ?
----------------------------------------------------------
You had a specific harmony in mind, am I right?
That's why we allow fine-grained control over the chord progression of the generated music.
Just specify it as a string like below, choose a time signature, and let the magic happen.
```python
from musiclang_predict import MusicLangPredictor
# Control the chord progression
# Chord qualities available : M, m, 7, m7b5, sus2, sus4, m7, M7, dim, dim0.
# You can also specify the bass if it belongs to the chord (eg : Bm/D)
chord_progression = "Am CM Dm E7 Am" # 1 chord = 1 bar
time_signature = (4, 4) # 4/4 time signature, don't be too crazy here
nb_tokens = 1024
temperature = 0.8
top_p = 1.0
seed = 42
ml = MusicLangPredictor('musiclang/musiclang-v2')
score = ml.predict_chords(
chord_progression,
time_signature=time_signature,
temperature=temperature,
topp=top_p,
rng_seed=seed # set to 0 to unset seed
)
score.to_midi('test.mid', tempo=120, time_signature=(4, 4))
```
Disclaimer : The chord progression is not guaranteed to be exactly the same as the one you specified. It's a generative model after all.
This usually happens when you use an exotic chord progression or set a high temperature.
That's cool but I have my music to plug in ...
------------------------------------------------
Don't worry, we got you covered. You can use your music as a template to generate new music.
Let's continue some Bach music with a chord progression he could have used :
```python
from musiclang_predict import MusicLangPredictor
from musiclang_predict import corpus
song_name = 'bach_847' # corpus.list_corpus() to get the list of available songs
chord_progression = "Cm C7/E Fm F#dim G7 Cm"
nb_tokens = 1024
temperature = 0.8
top_p = 1.0
seed = 3666
ml = MusicLangPredictor('musiclang/musiclang-v2')
score = ml.predict_chords(
chord_progression,
score=corpus.get_midi_path_from_corpus(song_name),
time_signature=(4, 4),
nb_tokens=1024,
prompt_chord_range=(0,4),
temperature=temperature,
topp=top_p,
rng_seed=seed # set to 0 to unset seed
)
score.to_midi('test.mid', tempo=110, time_signature=(4, 4))
```
What's coming next ?
---------------------
We are working on a lot of cool features, some are already encoded in the model :
- A control over the instruments used in each bar and their properties (note density, pitch range, average velocity)
- Some performance improvements over the inference C script
- A faster distilled model for real-time generation that can be embedded in plugins or mobile applications
- An integration into a DAW as a plugin
- Some specialized smaller models depending on our users' needs
How does that work ?
---------------------
If you want to learn more about how we are moving toward symbolic music generation, go to our [technical blog](https://musiclang.github.io/).
The tokenization and the model are described in great detail.
We are using a LLAMA2 architecture (many thanks to Andrej Karpathy's awesome [llama2.c](https://github.com/karpathy/llama2.c)), trained on a large dataset of midi files (the CC0-licensed [LAKH](https://colinraffel.com/projects/lmd/)).
We rely heavily on preprocessing the midi files to get an enriched tokenization that describes the chord and scale for each bar.
This is also helpful for normalizing melodies relative to the current chord/scale.
Contributing & Contact us
-------------------------
We are looking for contributors to help us improve the model, the tokenization, the performance and the documentation.
If you are interested in this project, open an issue, a pull request, or even [contact us directly](https://www.musiclang.io/contact).
License
-------
Specific licenses applies to our models. If you would like to use the model in your product, please
[contact us](https://www.musiclang.io/contact). We are looking forward to hearing from you !
MusicLang Predict is licensed under the GPL-3.0 License.
The MusicLang base language package on which the model rely ([musiclang package](https://github.com/musiclang/musiclang)) is licensed under the BSD 3-Clause License.
|
EverDarling/deberta-v3-base
|
EverDarling
| 2024-03-20T09:25:48Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-18T11:43:18Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: deberta-v3-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0216
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.9984
## Model description
More information needed
## Intended uses & limitations
More information needed
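A hedged usage sketch with the 🤗 Transformers pipeline (the entity label set is not documented in this card, so inspect the output to see it):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="EverDarling/deberta-v3-base",
    aggregation_strategy="simple",  # group sub-word pieces into whole entities
)
print(ner("Example sentence to tag."))
```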
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 1.1693 | 1.0 | 680 | 0.1284 | 0.0 | 0.0 | 0.0 | 0.9978 |
| 0.1244 | 2.0 | 1361 | 0.0289 | 0.0 | 0.0 | 0.0 | 0.9984 |
| 0.0213 | 3.0 | 2040 | 0.0216 | 0.0 | 0.0 | 0.0 | 0.9984 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
210010020-iitdh/mistral_try
|
210010020-iitdh
| 2024-03-20T09:23:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T09:22:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rorschach-40/home-batch_7_5000-text-classification
|
rorschach-40
| 2024-03-20T09:22:28Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:21:16Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: home-batch_7_5000-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# home-batch_7_5000-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3458
- Precision: 0.9065
- Recall: 0.9238
- F1: 0.9151
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 53 | 0.3469 | 0.9009 | 0.9524 | 0.9259 |
| 0.2795 | 2.0 | 106 | 0.3458 | 0.9065 | 0.9238 | 0.9151 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Berlinbenilo/code_llama_fin
|
Berlinbenilo
| 2024-03-20T09:21:59Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-03-20T09:01:20Z |
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.5.0
- PEFT 0.8.2

## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float32
|
nandieswar/phi2-nlp-to-sql
|
nandieswar
| 2024-03-20T09:17:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T09:17:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlignmentResearch/robust_llm_pythia-imdb-14m-mz-ada-v3-s-1
|
AlignmentResearch
| 2024-03-20T09:17:01Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:16:56Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_pythia-imdb-14m-mz-ada-v3-s-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-14m-mz-ada-v3-s-1
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
AlignmentResearch/robust_llm_pythia-imdb-14m-mz-ada-v3-s-2
|
AlignmentResearch
| 2024-03-20T09:16:59Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:16:54Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_pythia-imdb-14m-mz-ada-v3-s-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-14m-mz-ada-v3-s-2
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
linoyts/linoy_lora_v4
|
linoyts
| 2024-03-20T09:15:21Z | 6 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"diffusers-training",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-03-20T08:37:53Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers-training
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a <s0><s1> emoji dressed as yoda'
output:
url:
"image_0.png"
- text: 'a <s0><s1> emoji dressed as yoda'
output:
url:
"image_1.png"
- text: 'a <s0><s1> emoji dressed as yoda'
output:
url:
"image_2.png"
- text: 'a <s0><s1> emoji dressed as yoda'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a <s0><s1> emoji
license: openrail++
---
# SDXL LoRA DreamBooth - linoyts/linoy_lora_v4
<Gallery />
## Model description
### These are linoyts/linoy_lora_v4 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`linoy_lora_v4.safetensors` here 💾](/linoyts/linoy_lora_v4/blob/main/linoy_lora_v4.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:linoy_lora_v4:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`linoy_lora_v4_emb.safetensors` here 💾](/linoyts/linoy_lora_v4/blob/main/linoy_lora_v4_emb.safetensors)**.
- Place it in your `embeddings` folder
- Use it by adding `linoy_lora_v4_emb` to your prompt. For example, `a linoy_lora_v4_emb emoji`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('linoyts/linoy_lora_v4', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='linoyts/linoy_lora_v4', filename='linoy_lora_v4_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('a <s0><s1> emoji dressed as yoda').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/linoyts/linoy_lora_v4/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Afterglow777/chemical_dpo_2
|
Afterglow777
| 2024-03-20T09:15:05Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T09:07:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlignmentResearch/robust_llm_pythia-spam-70m-mz-ada-v3-s-1
|
AlignmentResearch
| 2024-03-20T09:09:15Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:finetune:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:08:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-70m-deduped
model-index:
- name: robust_llm_pythia-spam-70m-mz-ada-v3-s-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-spam-70m-mz-ada-v3-s-1
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
210010020-iitdh/mistral_taylor
|
210010020-iitdh
| 2024-03-20T09:05:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T09:04:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlignmentResearch/robust_llm_pythia-spam-70m-mz-ada-v3-s-2
|
AlignmentResearch
| 2024-03-20T09:04:40Z | 109 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m-deduped",
"base_model:finetune:EleutherAI/pythia-70m-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:04:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-70m-deduped
model-index:
- name: robust_llm_pythia-spam-70m-mz-ada-v3-s-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-spam-70m-mz-ada-v3-s-2
This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Buam/my-pet-cat
|
Buam
| 2024-03-20T09:04:39Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-20T09:00:40Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-CAT Dreambooth model trained by Buam following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 2328048
Sample pictures of this concept:

|
rorschach-40/home-batch_4_5000-text-classification
|
rorschach-40
| 2024-03-20T09:03:16Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T09:02:02Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: home-batch_4_5000-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# home-batch_4_5000-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4101
- Precision: 0.8819
- Recall: 0.9333
- F1: 0.9069
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 61 | 0.3941 | 0.8633 | 1.0 | 0.9266 |
| 0.3531 | 2.0 | 122 | 0.4101 | 0.8819 | 0.9333 | 0.9069 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Simonlob/TTS_Akyl-AI_alpha
|
Simonlob
| 2024-03-20T09:01:32Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"mms",
"text-to-speech",
"ky",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2024-03-18T12:54:53Z |
---
license: cc-by-nc-4.0
inference: true
tags:
- mms
- vits
pipeline_tag: text-to-speech
language:
- ky
---
# Introduction
This repository contains a text-to-speech (TTS) model fine-tuned on a dataset of Kyrgyz-language sentences voiced by a single speaker. The audio is provided at a sample rate of 16 kHz. The dataset comprises 5000 examples and 7 hours of audio. The model is based on the facebook/mms-tts-kir model pre-trained on the Kyrgyz language. The fine-tuning code was based on this [GitHub repository](https://github.com/ylacombe/finetune-hf-vits). Experiments showed that the best results are achieved with two-stage fine-tuning:
* Training with Learning Rate 1e-4 and 4 epochs,
* Training with Learning Rate 5e-7 and 80 epochs.
# MMS: Scaling Speech Technology to 1000+ languages
The Massively Multilingual Speech (MMS) project expands speech technology from about 100 languages to over 1,000 by building a single multilingual speech recognition model supporting over 1,100 languages (more than 10 times as many as before), language identification models able to identify over [4,000 languages](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html) (40 times more than before), pretrained models supporting over 1,400 languages, and text-to-speech models for over 1,100 languages. Our goal is to make it easier for people to access information and to use devices in their preferred language.
You can find details in the paper [Scaling Speech Technology to 1000+ languages](https://research.facebook.com/publications/scaling-speech-technology-to-1000-languages/) and the [blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/).
An overview of the languages covered by MMS can be found [here](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
## Transformers
MMS has been added to Transformers. For more information, please refer to [Transformers' MMS docs](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
[Click here](https://huggingface.co/models?other=mms) to find all MMS checkpoints on the Hub.
Check out the demo [here](https://huggingface.co/spaces/facebook/MMS).
# Inference
The model takes Cyrillic text in the Kyrgyz language as input and preprocesses it by removing punctuation marks (periods, commas, colons, exclamation and question marks) as well as words written in Latin script. It is therefore not advisable to feed multiple sentences into the model at once, as they will be vocalized without the intonational pauses that mark the end of one sentence and the beginning of the next. Words written in Latin script will be skipped in the generated speech.
For example:
```python
text = 'Кандай улут болбосун кыргызча жооп кайтарышыбыз керек.'
```
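To synthesize longer passages with natural pauses, the text can be split into sentences and cleaned of Latin-script words before being passed to the model one sentence at a time; a minimal sketch (this helper is not part of the original card):

```python
import re

def split_and_clean(text: str) -> list:
    """Split input into sentences and drop Latin-script words,
    mirroring the preprocessing described above."""
    sentences = re.split(r"[.!?]+", text)
    cleaned = []
    for sentence in sentences:
        # keep only words without Latin characters; the model skips Latin words anyway
        words = [w for w in sentence.split() if not re.search(r"[A-Za-z]", w)]
        if words:
            cleaned.append(" ".join(words))
    return cleaned
```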
You can use this model by executing the code provided below.
```python
import subprocess
from transformers import pipeline
from IPython.display import Audio
import numpy as np
import torch
import scipy
model_id = "Simonlob/simonlob_akylay"
synthesiser = pipeline("text-to-speech", model_id) # add device=0 if you want to use a GPU
```
```python
text = 'Кандай улут болбосун кыргызча жооп кайтарышыбыз керек.'
speech = synthesiser(text)
```
The output of the model looks as follows:
```
{'audio': array([[-1.7045566e-04, 8.9107212e-05, 2.8329418e-04, ...,
8.0898666e-08, 4.8763245e-06, 5.4663483e-06]], dtype=float32),
'sampling_rate': 16000}
```
Listen to the result:
```python
Audio(speech['audio'], rate=speech['sampling_rate'])
```
Save the audio as a file:
```python
scipy.io.wavfile.write("<OUTPUT PATH>.wav", rate=speech["sampling_rate"], data=speech["audio"][0])
```
## Model details
- **Model type:** Text-to-speech model
- **License:** CC-BY-NC 4.0 license
- **Cite as:**
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
## Credits
- Facebook AI Research ([Official Space](https://huggingface.co/spaces/facebook/MMS))
- Yoach Lacombe (Research) [GitHub](https://github.com/ylacombe/finetune-hf-vits)
- The Cramer Project (Data collection and preprocessing) [Official Space](https://thecramer.com/), [Akyl_AI](https://github.com/Akyl-AI)
- Amantur Amatov (Expert)
- Timur Turatali (Expert, Research) [GitHub](https://github.com/golden-ratio)
- Den Pavlov (Research, Data preprocessing and fine-tuning) [GitHub](https://github.com/simonlobgromov/finetune-hf-vits)
- Ulan Abdurazakov (Environment Developer)
- Nursultan Bakashov (CEO)
## Additional Links
- [Blog post](https://ai.facebook.com/blog/multilingual-model-speech-recognition/)
- [Transformers documentation](https://huggingface.co/docs/transformers/main/en/model_doc/mms).
- [Paper](https://arxiv.org/abs/2305.13516)
- [GitHub Repository for fine tuning](https://github.com/ylacombe/finetune-hf-vits)
- [GitHub Repository](https://github.com/facebookresearch/fairseq/tree/main/examples/mms#asr)
- [Other **MMS** checkpoints](https://huggingface.co/models?other=mms)
- MMS base checkpoints:
- [facebook/mms-1b](https://huggingface.co/facebook/mms-1b)
- [facebook/mms-300m](https://huggingface.co/facebook/mms-300m)
|
rorschach-40/home-batch_3_5000-text-classification
|
rorschach-40
| 2024-03-20T08:59:46Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T08:58:30Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: home-batch_3_5000-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# home-batch_3_5000-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5097
- Precision: 0.8462
- Recall: 0.8609
- F1: 0.8534
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 64 | 0.4766 | 0.7852 | 0.9217 | 0.848 |
| 0.3676 | 2.0 | 128 | 0.5097 | 0.8462 | 0.8609 | 0.8534 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
AlignmentResearch/robust_llm_pythia-spam-31m-mz-ada-v3-s-2
|
AlignmentResearch
| 2024-03-20T08:59:09Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-31m",
"base_model:finetune:EleutherAI/pythia-31m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T08:59:03Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-31m
model-index:
- name: robust_llm_pythia-spam-31m-mz-ada-v3-s-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-spam-31m-mz-ada-v3-s-2
This model is a fine-tuned version of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Hemg/Birdsclassification
|
Hemg
| 2024-03-20T08:56:55Z | 194 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-19T11:52:54Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Birdsclassification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Birdsclassification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3057
- Accuracy: 0.9307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 5.42 | 1.0 | 262 | 3.6698 | 0.7571 |
| 1.7968 | 2.0 | 525 | 0.9179 | 0.8396 |
| 0.6598 | 3.0 | 787 | 0.6370 | 0.8654 |
| 0.4867 | 4.0 | 1050 | 0.5493 | 0.8765 |
| 0.4055 | 5.0 | 1312 | 0.5093 | 0.8833 |
| 0.3513 | 6.0 | 1575 | 0.4602 | 0.8892 |
| 0.3053 | 7.0 | 1837 | 0.4350 | 0.8977 |
| 0.2692 | 8.0 | 2100 | 0.4130 | 0.9021 |
| 0.2446 | 9.0 | 2362 | 0.4218 | 0.9018 |
| 0.2267 | 10.0 | 2625 | 0.3667 | 0.9130 |
| 0.2018 | 11.0 | 2887 | 0.3632 | 0.9154 |
| 0.1842 | 12.0 | 3150 | 0.3533 | 0.9154 |
| 0.1636 | 13.0 | 3412 | 0.3396 | 0.9206 |
| 0.1511 | 14.0 | 3675 | 0.3125 | 0.9266 |
| 0.1411 | 15.0 | 3937 | 0.2833 | 0.9329 |
| 0.1259 | 15.97 | 4192 | 0.3057 | 0.9307 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
rorschach-40/home-batch_2_5000-text-classification
|
rorschach-40
| 2024-03-20T08:54:06Z | 49 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T08:52:48Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: home-batch_2_5000-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# home-batch_2_5000-text-classification
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4627
- Precision: 0.8169
- Recall: 0.9431
- F1: 0.8755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 64 | 0.4787 | 0.7785 | 1.0 | 0.8754 |
| 0.4405 | 2.0 | 128 | 0.4627 | 0.8169 | 0.9431 | 0.8755 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
weny22/sum_model_lr1e_3_20epoch
|
weny22
| 2024-03-20T08:53:34Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:weny22/sum_model_t5_saved",
"base_model:finetune:weny22/sum_model_t5_saved",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-19T08:17:23Z |
---
base_model: weny22/sum_model_t5_saved
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: sum_model_lr1e_3_20epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sum_model_lr1e_3_20epoch
This model achieved the best results so far.
This model is a fine-tuned version of [weny22/sum_model_t5_saved](https://huggingface.co/weny22/sum_model_t5_saved) on the INF582-2023-24 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8879
- Rouge1: 0.2188
- Rouge2: 0.0915
- Rougel: 0.181
- Rougelsum: 0.1808
- Gen Len: 18.98
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 335 | 2.1280 | 0.196 | 0.0662 | 0.156 | 0.1559 | 18.988 |
| 2.8114 | 2.0 | 670 | 2.0104 | 0.2004 | 0.0724 | 0.1609 | 0.1609 | 18.956 |
| 2.2319 | 3.0 | 1005 | 1.9785 | 0.2082 | 0.0776 | 0.1681 | 0.1681 | 18.964 |
| 2.2319 | 4.0 | 1340 | 1.9377 | 0.2084 | 0.0831 | 0.1703 | 0.1704 | 18.9787 |
| 2.0444 | 5.0 | 1675 | 1.8873 | 0.2107 | 0.0836 | 0.1719 | 0.1722 | 18.9813 |
| 1.9359 | 6.0 | 2010 | 1.8945 | 0.2132 | 0.0848 | 0.1736 | 0.1735 | 18.9733 |
| 1.9359 | 7.0 | 2345 | 1.8949 | 0.2135 | 0.0843 | 0.1725 | 0.1727 | 18.9627 |
| 1.8292 | 8.0 | 2680 | 1.8741 | 0.2155 | 0.0869 | 0.1762 | 0.1765 | 18.9487 |
| 1.7623 | 9.0 | 3015 | 1.8679 | 0.2154 | 0.0873 | 0.176 | 0.1759 | 18.9767 |
| 1.7623 | 10.0 | 3350 | 1.8627 | 0.2171 | 0.0883 | 0.1774 | 0.1775 | 18.9833 |
| 1.6812 | 11.0 | 3685 | 1.8617 | 0.217 | 0.0877 | 0.176 | 0.1759 | 18.9827 |
| 1.6331 | 12.0 | 4020 | 1.8572 | 0.2154 | 0.088 | 0.1756 | 0.1757 | 18.982 |
| 1.6331 | 13.0 | 4355 | 1.8645 | 0.2175 | 0.0895 | 0.178 | 0.178 | 18.972 |
| 1.5737 | 14.0 | 4690 | 1.8707 | 0.2168 | 0.0877 | 0.1761 | 0.1761 | 18.978 |
| 1.5326 | 15.0 | 5025 | 1.8764 | 0.2204 | 0.09 | 0.1805 | 0.1804 | 18.9827 |
| 1.5326 | 16.0 | 5360 | 1.8746 | 0.2196 | 0.0916 | 0.1804 | 0.1804 | 18.9767 |
| 1.4881 | 17.0 | 5695 | 1.8734 | 0.2195 | 0.0924 | 0.1804 | 0.1806 | 18.9867 |
| 1.4631 | 18.0 | 6030 | 1.8869 | 0.219 | 0.091 | 0.1802 | 0.1802 | 18.972 |
| 1.4631 | 19.0 | 6365 | 1.8886 | 0.2201 | 0.092 | 0.1819 | 0.1819 | 18.9847 |
| 1.4345 | 20.0 | 6700 | 1.8879 | 0.2188 | 0.0915 | 0.181 | 0.1808 | 18.98 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Ketansomewhere/sd-class2
|
Ketansomewhere
| 2024-03-20T08:49:23Z | 44 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-03-20T08:49:14Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Ketansomewhere/sd-class2')
image = pipeline().images[0]
image
```
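As a small follow-up to the snippet above (not part of the original card), the generated PIL image can be saved to disk; the filename is an arbitrary choice:

```python
# Continues from the card's example above, where `image` is a PIL.Image.
image.save("butterfly_sample.png")
```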
|
rishiai/gpt-2-finetuned
|
rishiai
| 2024-03-20T08:44:48Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T07:37:02Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
arvnoodle/hcl-codellama-instruct-13b-javascript-lotuscript
|
arvnoodle
| 2024-03-20T08:38:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-13b-Instruct-hf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-20T08:38:26Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: codellama/CodeLlama-13b-Instruct-hf
---
# Uploaded model
- **Developed by:** arvnoodle
- **License:** apache-2.0
- **Finetuned from model :** codellama/CodeLlama-13b-Instruct-hf
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ardaorcun/finetuned_cosmos1603
|
ardaorcun
| 2024-03-20T08:37:11Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-16T15:23:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shahid04/Modelsample1
|
Shahid04
| 2024-03-20T08:36:26Z | 161 | 0 |
transformers
|
[
"transformers",
"safetensors",
"blenderbot",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-20T08:34:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lemon-mint/gemma-ko-it-v0.5
|
lemon-mint
| 2024-03-20T08:24:50Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gemma",
"text-generation",
"conversational",
"ko",
"en",
"dataset:maywell/koVast",
"dataset:beomi/KoAlpaca-v1.1a",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T03:38:35Z |
---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
datasets:
- maywell/koVast
- beomi/KoAlpaca-v1.1a
language:
- ko
- en
widget:
- messages:
- role: user
content: 햄스터와 고양이의 차이점에 대해서 설명해줘.
inference:
parameters:
max_new_tokens: 256
---
A Korean fine-tuning experiment on Gemma 2B Instruct using the [maywell/koVast](https://huggingface.co/datasets/maywell/koVast) dataset.
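A minimal usage sketch (not part of the original card), assuming the standard 🤗 text-generation pipeline chat interface:

```python
from transformers import pipeline

# add device=0 (or a device_map) if a GPU is available
pipe = pipeline("text-generation", model="lemon-mint/gemma-ko-it-v0.5")
messages = [{"role": "user", "content": "햄스터와 고양이의 차이점에 대해서 설명해줘."}]
print(pipe(messages, max_new_tokens=256)[0]["generated_text"])
```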
|
ThuyNT03/CS505-NerCSI-PhoBERT_v2
|
ThuyNT03
| 2024-03-20T08:24:35Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base-v2",
"base_model:finetune:vinai/phobert-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-20T08:00:51Z |
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
model-index:
- name: CS505-NerCSI-PhoBERT_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505-NerCSI-PhoBERT_v2
This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 41
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 369 | 0.3216 |
| 0.4522 | 2.0 | 738 | 0.2479 |
| 0.2944 | 3.0 | 1107 | 0.2174 |
| 0.2944 | 4.0 | 1476 | 0.1441 |
| 0.2264 | 5.0 | 1845 | 0.1032 |
| 0.1526 | 6.0 | 2214 | 0.0730 |
| 0.1058 | 7.0 | 2583 | 0.0611 |
| 0.1058 | 8.0 | 2952 | 0.0415 |
| 0.0733 | 9.0 | 3321 | 0.0333 |
| 0.0459 | 10.0 | 3690 | 0.0295 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Owhslp/nous_researcher_tuning_4_3
|
Owhslp
| 2024-03-20T08:23:44Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T07:08:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Heng666/Taiwan_kapok_0.B_ckpt
|
Heng666
| 2024-03-20T08:22:42Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"Llama",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-20T07:16:31Z |
---
license: apache-2.0
language:
- zh
tags:
- Llama
pipeline_tag: text-generation
---
|
second-state/Llava-v1.6-Vicuna-7B-GGUF
|
second-state
| 2024-03-20T08:22:23Z | 131 | 2 |
transformers
|
[
"transformers",
"gguf",
"llava",
"text-generation",
"base_model:liuhaotian/llava-v1.6-vicuna-7b",
"base_model:quantized:liuhaotian/llava-v1.6-vicuna-7b",
"license:llama2",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-02-25T15:46:16Z |
---
base_model: liuhaotian/llava-v1.6-vicuna-7b
inference: false
library_name: transformers
license: llama2
model_creator: liuhaotian
model_name: Llava v1.6 Vicuna 7B
pipeline_tag: text-generation
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llava-v1.6-Vicuna-7B-GGUF
## Original Model
[liuhaotian/llava-v1.6-vicuna-7b](https://huggingface.co/liuhaotian/llava-v1.6-vicuna-7b)
## Run with LlamaEdge
- LlamaEdge version: coming soon
- Prompt template
- Prompt type: `vicuna-llava`
- Prompt string
```text
<system_prompt>\nUSER:<image_embeddings>\n<textual_prompt>\nASSISTANT:
```
- Context size: `4096`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:llava-v1.6-vicuna-7b-Q5_K_M.gguf llama-api-server.wasm -p vicuna-llava -c 4096 --llava-mmproj llava-v1.6-vicuna-7b-mmproj-model-f16.gguf -m llava-v1.6-vicuna-7b
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [llava-v1.6-vicuna-7b-Q2_K.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q2_K.gguf) | Q2_K | 2 | 2.53 GB| smallest, significant quality loss - not recommended for most purposes |
| [llava-v1.6-vicuna-7b-Q3_K_L.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q3_K_L.gguf) | Q3_K_L | 3 | 3.6 GB| small, substantial quality loss |
| [llava-v1.6-vicuna-7b-Q3_K_M.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q3_K_M.gguf) | Q3_K_M | 3 | 3.3 GB| very small, high quality loss |
| [llava-v1.6-vicuna-7b-Q3_K_S.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| very small, high quality loss |
| [llava-v1.6-vicuna-7b-Q4_0.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [llava-v1.6-vicuna-7b-Q4_K_M.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| medium, balanced quality - recommended |
| [llava-v1.6-vicuna-7b-Q4_K_S.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| small, greater quality loss |
| [llava-v1.6-vicuna-7b-Q5_0.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [llava-v1.6-vicuna-7b-Q5_K_M.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| large, very low quality loss - recommended |
| [llava-v1.6-vicuna-7b-Q5_K_S.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| large, low quality loss - recommended |
| [llava-v1.6-vicuna-7b-Q6_K.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q6_K.gguf) | Q6_K | 6 | 5.53 GB| very large, extremely low quality loss |
| [llava-v1.6-vicuna-7b-Q8_0.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| very large, extremely low quality loss - not recommended |
| [llava-v1.6-vicuna-7b-mmproj-model-f16.gguf](https://huggingface.co/second-state/Llava-v1.6-Vicuna-7B-GGUF/blob/main/llava-v1.6-vicuna-7b-mmproj-model-f16.gguf) | f16 | 8 | 624 MB| |
*Quantized with llama.cpp b2230*
|
adamjweintraut/bart-finetuned-lyrlen-128-special_tokens
|
adamjweintraut
| 2024-03-20T08:19:56Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-20T03:45:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: facebook/bart-large
model-index:
- name: bart-finetuned-lyrlen-128-special_tokens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-lyrlen-128-special_tokens
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2828 | 0.33 | 500 | 3.0015 |
| 3.0513 | 0.67 | 1000 | 2.9361 |
| 2.9573 | 1.0 | 1500 | 2.9111 |
| 2.8841 | 1.33 | 2000 | 2.9007 |
| 2.8352 | 1.67 | 2500 | 2.9764 |
| 2.7897 | 2.0 | 3000 | 2.9606 |
| 2.7511 | 2.33 | 3500 | 2.9490 |
| 2.7284 | 2.67 | 4000 | 2.9458 |
| 2.7167 | 3.0 | 4500 | 2.9470 |
| 2.7226 | 3.33 | 5000 | 2.9418 |
| 2.6823 | 3.67 | 5500 | 2.9317 |
| 2.6445 | 4.0 | 6000 | 2.9389 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mlx-vision/wide_resnet101_2-mlxim
|
mlx-vision
| 2024-03-20T08:19:53Z | 7 | 0 |
mlx-image
|
[
"mlx-image",
"safetensors",
"mlx",
"vision",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1605.07146",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-02-24T09:29:27Z |
---
license: apache-2.0
tags:
- mlx
- mlx-image
- vision
- image-classification
datasets:
- imagenet-1k
library_name: mlx-image
---
# Wide ResNet101 2
Wide ResNet101-2 is a computer vision model trained on imagenet-1k that improves on the original ResNet architecture. It was introduced in the paper [Wide Residual Networks](https://arxiv.org/abs/1605.07146).
Disclaimer: this is a port of the torchvision model weights to the Apple MLX framework.
## How to use
```bash
pip install mlx-image
```
Here is how to use this model for image classification:
```python
import mlx.core as mx
from mlxim.model import create_model
from mlxim.io import read_rgb
from mlxim.transform import ImageNetTransform
transform = ImageNetTransform(train=False, img_size=224)
x = transform(read_rgb("cat.png"))
x = mx.expand_dims(x, 0)
model = create_model("resnet18")
model.eval()
logits = model(x)
```
You can also use the embeds from last conv layer:
```python
import mlx.core as mx
from mlxim.model import create_model
from mlxim.io import read_rgb
from mlxim.transform import ImageNetTransform
transform = ImageNetTransform(train=False, img_size=224)
x = transform(read_rgb("cat.png"))
x = mx.expand_dims(x, 0)
# first option
model = create_model("wide_resnet101_2", num_classes=0)
model.eval()
embeds = model(x)
# second option
model = create_model("wide_resnet101_2")
model.eval()
embeds = model.get_features(x)
```
## Model Comparison
Explore the metrics of this model in [mlx-image model results](https://github.com/riccardomusmeci/mlx-image/blob/main/results/results-imagenet-1k.csv).
|
mlx-vision/resnet152-mlxim
|
mlx-vision
| 2024-03-20T08:19:21Z | 9 | 0 |
mlx-image
|
[
"mlx-image",
"safetensors",
"mlx",
"vision",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1512.03385",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-02-23T16:45:55Z |
---
license: apache-2.0
tags:
- mlx
- mlx-image
- vision
- image-classification
datasets:
- imagenet-1k
library_name: mlx-image
---
# ResNet152
ResNet152 is a computer vision model trained on imagenet-1k. It was introduced in the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) and first released in [this repository](https://github.com/KaimingHe/deep-residual-networks).
Disclaimer: this is a port of the torchvision model weights to the Apple MLX framework.
## How to use
```bash
pip install mlx-image
```
Here is how to use this model for image classification:
```python
import mlx.core as mx
from mlxim.model import create_model
from mlxim.io import read_rgb
from mlxim.transform import ImageNetTransform
transform = ImageNetTransform(train=False, img_size=224)
x = transform(read_rgb("cat.png"))
x = mx.expand_dims(x, 0)
model = create_model("resnet152")
model.eval()
logits = model(x)
```
You can also use the embeds from last conv layer:
```python
import mlx.core as mx
from mlxim.model import create_model
from mlxim.io import read_rgb
from mlxim.transform import ImageNetTransform
transform = ImageNetTransform(train=False, img_size=224)
x = transform(read_rgb("cat.png"))
x = mx.expand_dims(x, 0)
# first option
model = create_model("resnet152", num_classes=0)
model.eval()
embeds = model(x)
# second option
model = create_model("resnet152")
model.eval()
embeds = model.get_features(x)
```
## Model Comparison
Explore the metrics of this model in [mlx-image model results](https://github.com/riccardomusmeci/mlx-image/blob/main/results/results-imagenet-1k.csv).
|
second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF
|
second-state
| 2024-03-20T08:18:27Z | 63 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation",
"base_model:DrNicefellow/ChatAllInOne-Yi-34B-200K-V1",
"base_model:quantized:DrNicefellow/ChatAllInOne-Yi-34B-200K-V1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-14T12:51:58Z |
---
base_model: DrNicefellow/ChatAllInOne-Yi-34B-200K-V1
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
model_creator: DrNicefellow
model_name: ChatAllInOne-Yi-34B-200K-V1
pipeline_tag: text-generation
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# ChatAllInOne-Yi-34B-200K-V1-GGUF
## Original Model
[DrNicefellow/ChatAllInOne-Yi-34B-200K-V1](https://huggingface.co/DrNicefellow/ChatAllInOne-Yi-34B-200K-V1)
## Run with LlamaEdge
- LlamaEdge version: coming soon
- Prompt template
- Prompt type: `vicuna-1.1-chat`
- Prompt string
```text
USER: {prompt}
ASSISTANT:
```
- Context size: `7168`
<!-- - Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:openchat-3.5-0106-Q5_K_M.gguf llama-api-server.wasm -p openchat -r '<|end_of_turn|>'
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:openchat-3.5-0106-Q5_K_M.gguf llama-chat.wasm -p openchat -r '<|end_of_turn|>'
``` -->
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [ChatAllInOne-Yi-34B-200K-V1-Q2_K.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q2_K.gguf) | Q2_K | 2 | 12.8 GB| smallest, significant quality loss - not recommended for most purposes |
| [ChatAllInOne-Yi-34B-200K-V1-Q3_K_L.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q3_K_L.gguf) | Q3_K_L | 3 | 18.1 GB| small, substantial quality loss |
| [ChatAllInOne-Yi-34B-200K-V1-Q3_K_M.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q3_K_M.gguf) | Q3_K_M | 3 | 16.7 GB| very small, high quality loss |
| [ChatAllInOne-Yi-34B-200K-V1-Q3_K_S.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q3_K_S.gguf) | Q3_K_S | 3 | 15 GB| very small, high quality loss |
| [ChatAllInOne-Yi-34B-200K-V1-Q4_0.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q4_0.gguf) | Q4_0 | 4 | 19.5 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [ChatAllInOne-Yi-34B-200K-V1-Q4_K_M.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q4_K_M.gguf) | Q4_K_M | 4 | 20.7 GB| medium, balanced quality - recommended |
| [ChatAllInOne-Yi-34B-200K-V1-Q4_K_S.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q4_K_S.gguf) | Q4_K_S | 4 | 19.6 GB| small, greater quality loss |
| [ChatAllInOne-Yi-34B-200K-V1-Q5_0.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q5_0.gguf) | Q5_0 | 5 | 23.7 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [ChatAllInOne-Yi-34B-200K-V1-Q5_K_M.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q5_K_M.gguf) | Q5_K_M | 5 | 24.3 GB| large, very low quality loss - recommended |
| [ChatAllInOne-Yi-34B-200K-V1-Q5_K_S.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q5_K_S.gguf) | Q5_K_S | 5 | 23.7 GB| large, low quality loss - recommended |
| [ChatAllInOne-Yi-34B-200K-V1-Q6_K.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q6_K.gguf) | Q6_K | 6 | 28.2 GB| very large, extremely low quality loss |
| [ChatAllInOne-Yi-34B-200K-V1-Q8_0.gguf](https://huggingface.co/second-state/ChatAllInOne-Yi-34B-200K-V1-GGUF/blob/main/ChatAllInOne-Yi-34B-200K-V1-Q8_0.gguf) | Q8_0 | 8 | 36.5 GB| very large, extremely low quality loss - not recommended |
*Quantized with llama.cpp b2334*
|
Lewdiculous/Multi-Verse-RP-7B-GGUF-IQ-Imatrix
|
Lewdiculous
| 2024-03-20T08:15:48Z | 58 | 3 | null |
[
"gguf",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-20T04:18:33Z |
---
license: cc-by-4.0
---
GGUF-Imatrix quants of [saishf/Multi-Verse-RP-7B](https://huggingface.co/saishf/Multi-Verse-RP-7B/).
**Experimental.**

|
irokoy/setsuna
|
irokoy
| 2024-03-20T08:15:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-03-20T08:03:52Z |
At the maximum it tends to strike strange poses.
60 was used.
|
second-state/StarCoder2-7B-GGUF
|
second-state
| 2024-03-20T08:12:57Z | 9,938 | 12 |
transformers
|
[
"transformers",
"gguf",
"starcoder2",
"text-generation",
"code",
"base_model:bigcode/starcoder2-7b",
"base_model:quantized:bigcode/starcoder2-7b",
"license:bigcode-openrail-m",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-03-02T07:35:41Z |
---
base_model: bigcode/starcoder2-7b
inference: false
license: bigcode-openrail-m
library_name: transformers
model_creator: bigcode
model_name: StarCoder2 7B
pipeline_tag: text-generation
quantized_by: Second State Inc.
tags:
- code
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# StarCoder2-7B-GGUF
## Original Model
[bigcode/starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b)
## Run with LlamaEdge
- LlamaEdge version: coming soon
- Context size: `4608`
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [starcoder2-7b-Q2_K.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q2_K.gguf) | Q2_K | 2 | 2.72 GB| smallest, significant quality loss - not recommended for most purposes |
| [starcoder2-7b-Q3_K_L.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q3_K_L.gguf) | Q3_K_L | 3 | 3.99 GB| small, substantial quality loss |
| [starcoder2-7b-Q3_K_M.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q3_K_M.gguf) | Q3_K_M | 3 | 3.59 GB| very small, high quality loss |
| [starcoder2-7b-Q3_K_S.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q3_K_S.gguf) | Q3_K_S | 3 | 3.09 GB| very small, high quality loss |
| [starcoder2-7b-Q4_0.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q4_0.gguf) | Q4_0 | 4 | 4.04 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [starcoder2-7b-Q4_K_M.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q4_K_M.gguf) | Q4_K_M | 4 | 4.4 GB| medium, balanced quality - recommended |
| [starcoder2-7b-Q4_K_S.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q4_K_S.gguf) | Q4_K_S | 4 | 4.13 GB| small, greater quality loss |
| [starcoder2-7b-Q5_0.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q5_0.gguf) | Q5_0 | 5 | 4.94 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [starcoder2-7b-Q5_K_M.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q5_K_M.gguf) | Q5_K_M | 5 | 5.12 GB| large, very low quality loss - recommended |
| [starcoder2-7b-Q5_K_S.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q5_K_S.gguf) | Q5_K_S | 5 | 4.94 GB| large, low quality loss - recommended |
| [starcoder2-7b-Q6_K.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q6_K.gguf) | Q6_K | 6 | 5.89 GB| very large, extremely low quality loss |
| [starcoder2-7b-Q8_0.gguf](https://huggingface.co/second-state/StarCoder2-7B-GGUF/blob/main/starcoder2-7b-Q8_0.gguf) | Q8_0 | 8 | 7.63 GB| very large, extremely low quality loss - not recommended |
*Quantized with llama.cpp b2308*
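Since the LlamaEdge instructions are still marked as coming soon, here is a hedged alternative sketch using the llama-cpp-python bindings (not part of the original card); the quantization file is one of those listed above and must be downloaded locally first:

```python
from llama_cpp import Llama

# Load a locally downloaded quant from the table above.
llm = Llama(model_path="starcoder2-7b-Q4_K_M.gguf", n_ctx=4608)
out = llm("def fibonacci(n):", max_tokens=128)
print(out["choices"][0]["text"])
```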
|