modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string)
---|---|---|---|---|---|---|---|---|---
NhatTranKKK/q-FrozenLake-v1-4x4-noSlippery
|
NhatTranKKK
| 2024-01-12T04:42:26Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-12T04:42:24Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="NhatTranKKK/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
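Note that `load_from_hub` above refers to the helper defined in the Hugging Face Deep RL course notebook rather than a packaged function. A minimal sketch of such a helper, assuming the repository stores the model as a pickled dict exposing `env_id` and `qtable` keys (as in the course):
```python
import pickle

import gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model dict from the Hub and unpickle it."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="NhatTranKKK/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)

obs = env.reset()
state = obs[0] if isinstance(obs, tuple) else obs  # gym>=0.26 returns (obs, info)
action = int(model["qtable"][state].argmax())      # greedy action from the learned Q-table
```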
|
wesley7137/TinyLlama-OpenHermes-MOE-MetaMath-Expert
|
wesley7137
| 2024-01-12T04:40:19Z | 0 | 0 |
peft
|
[
"peft",
"llama",
"region:us"
] | null | 2024-01-12T02:40:44Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
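For reference, this roughly corresponds to the following `transformers`/`bitsandbytes` setup when loading a base model to attach this adapter to; the base checkpoint below is a placeholder, since the card does not name it:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# Placeholder base model id -- the card does not state which checkpoint the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "wesley7137/TinyLlama-OpenHermes-MOE-MetaMath-Expert")
```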
### Framework versions
- PEFT 0.4.0
|
LR-AI-Labs/vbd-llama2-7B-50b-chat
|
LR-AI-Labs
| 2024-01-12T04:35:50Z | 95 | 25 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"vi",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-26T05:18:13Z |
---
license: llama2
language:
- en
- vi
---
<p align="center"> <img src="vbd_logo.png" width="600" /> </p>
<h1>VBD-LLaMA2-Chat - a Conversationally-tuned LLaMA2 for Vietnamese</h1>
(*Disclaimer 1: VBD-LLaMA family is an effort by VinBigData to support and promote research on LLM in Vietnam. This model is not related to the ViGPT/ViViChat or any other product operating at VinBigData*)
We release VBD-LLaMA2-7B-Chat, a finetuned model based on Meta's LLaMA2-7B specifically for the Vietnamese 🇻🇳 language. This is part of our effort to support the community in building Vietnamese Large Language Models (LLMs). The pretrained weight for this model was trained through continued self-supervised learning (SSL) by extending LLaMA2's vocab on a corpus consisting of 100 billion Vietnamese 🇻🇳 tokens and 40 billion English 🇬🇧 tokens. This approach attempts to leverage the full potential of existing language models and adapt them to lower-resource languages, thereby reducing the hardware, time, and data costs associated with building LLMs for these languages. Subsequent supervised finetuning (SFT) was conducted using our internal SFT dataset, which consists of 2 million Vietnamese samples.
For this release, we are only including the pretrained weight and the SFT weight of our model's checkpoint, which was trained on 40b Vietnamese and 16b English tokens (56b tokens total).
<h3>Model weights:</h3>
- VBD-LLaMA2-7B-50b: the snapshot of the pretrained model after 40b Vietnamese tokens and 16b English tokens (~56b tokens total)
- VBD-LLaMA2-7B-50b-Chat: a snapshot demonstrating the efficacy of the proposed methodology. This model is pretrained on 40b Vietnamese tokens and 16b English tokens, then supervised fine-tuned (SFT) on 2 million samples.
<blockquote style="color:red"> <p><strong style="color: red">Terms of Use and License</strong>: By using our released weights, you agree to and comply with the terms and conditions specified in Meta's LLaMA-2 license.</blockquote>
**Disclaimer 2: While we have made considerable efforts to minimize misleading, inaccurate, and harmful content generation, it's important to acknowledge that our released model carries inherent risks. We strongly recommend utilizing this model exclusively within a closely supervised environment and/or conducting additional testing, red teaming, and alignment procedures. The utilization of this model must adhere to and comply with local governance and regulations. The authors of this model shall not be held liable for any claims, damages, or other liabilities arising from the use of the released weights.**
<h3>Pre-training Proposal</h3>
We propose continued pretraining of 3/7/13 billion parameter large language models (LLaMA, Bloom, MPT, Falcon, etc.) for the Vietnamese and English languages.
Our proposal involves conducting experiments to enhance the conversational capabilities of this model in Vietnamese while retaining its abilities in English. This will be achieved by transferring knowledge from the English latent space to the Vietnamese latent space.
Instead of training a Vietnamese LLM from scratch, we want to leverage the full potential of existing language models (in English) and transform it into Vietnamese. We aim to reduce hardware costs, time, and data in building language models for Vietnamese.
We intend to augment the original latent space of LLaMA/Bloom LLM by incorporating a Vietnamese latent space. We will then transfer knowledge between these two spaces and fine-tune self-supervised learning (SSL) using both English and Vietnamese unsupervised corpora.
With this model, we expect to make a significant contribution to the development of large language models in Vietnam, making it easier for Vietnamese people to access larger language models in-house. It will create a recipe for other low-resource languages to follow as well.
**Vietnamese language, methods, and research objectives**
We experiment with adding the Vietnamese language to large language models that do not originally support Vietnamese. Our hypothesis is that it is feasible to transfer knowledge between different languages by utilizing the cross-lingual capabilities of large models, allowing us to quickly develop a Vietnamese LLM with less training time, data, and computational resources.
**Our proposed methods:**
1. We will start with an English/multilingual large language model:
+ https://huggingface.co/meta-llama/Llama-2-7b-hf
2. We will rebuild the BPE-based tokenizers by preserving the original tokens and incorporating Vietnamese syllables (a rough sketch of this step is shown after the list).
3. We will transfer knowledge in the latent space by fine-tuning the added latent space while freezing the original latent space. This step is conducted by using the En-Vi and Vi-En translation tasks.
4. Using the new latent space (original latent space + added latent space), we will run self-supervised learning (SSL) fine-tuning on 40B English tokens and 100B Vietnamese tokens of unsupervised corpora (for comparison, recent well-performing LLaMA models are trained on around 1-1.5T tokens).
+ In this step, we use a special strategy called hybrid training. This allows the model to have better zero-shot/few-shot capabilities even if the model has not been SFT trained. It also enhances the model's ability to understand prompts with limited SFT.
5. The training time is roughly 8k GPU hours for the 3B model (roughly 44 days on a DGX with 8 A100 40GB GPUs) and 16k GPU hours for the 7B model (roughly 84 days on the same hardware).
6. We will evaluate the model periodically to observe improvements and/or the possibility of early completion of the training process.
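The vocabulary-extension step (item 2 above) could look roughly like the following at the `transformers` level; the authors rebuild the BPE tokenizer itself, so this is only an illustrative sketch, and the syllable list is a placeholder:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder -- in practice this would be the full inventory of Vietnamese syllables/subwords.
vietnamese_syllables = ["xin", "chào", "tiếng", "việt"]
num_added = tokenizer.add_tokens(vietnamese_syllables)

# Grow the embedding matrix so the new token ids get freshly initialized rows;
# these rows are what the knowledge-transfer and SSL stages then train.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocabulary size: {len(tokenizer)}")
```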
<h3>Supervised Fine-Tuning (SFT)</h3>
We believe that Conversational-AI will be a significant interface for human-machine interaction in the next few years. Therefore, VBD-LLaMA2-7B-50b-Chat is finetuned on 2 million conversational samples, in hopes that there will be more applications of LLMs in conversational systems in the near future.
In the following section, we document some benchmarks of the released weight(s).
<h3>Evaluation</h3>
We evaluated our model via peer comparison on multiple publicly available datasets using
<a href="https://github.com/hieunguyen1053/lm-evaluation-harness"> @hieunguyen1053's fork of lm-evaluation-harness </a>
, and combined the results with those provided by the authors of VinaLLaMA. The results are below:
| Model | Model size | arc_vi (acc) | hellaswag_vi (acc) | mmlu_vi (acc) | truthfulqa_vi (acc) | Average |
| ---------------------- | ---------- | ------------ | ------------------ | ------------- | ------------------- | ------- |
| URA-LLaMA-13B | | 0,3752 | 0,4830 | 0,3973 | 0,4574 | 0,4282 |
| BLOOMZ-7B | | 0,3205 | 0,4930 | 0,3975 | 0,4523 | 0,4158 |
| PhoGPT-7B5-Instruct | | 0,2470 | 0,2578 | 0,2413 | 0,4759 | 0,3055 |
| SeaLLM-7B-chat | | 0,3607 | 0,5112 | 0,3339 | 0,4948 | 0,4252 |
| Vietcuna-7b-v3 | | 0,3419 | 0,4939 | 0,3354 | 0,4807 | 0,4130 |
| VinaLLaMA-2.7B-chat | | 0,3273 | 0,4814 | 0,3051 | 0,4972 | 0,4028 |
| VinaLLaMA-7B-chat | | 0,4239 | 0,5407 | 0,3932 | 0,5251 | 0,4707 |
| VBD-LLaMA2-7B-50b | | 0,3222 | 0,5195 | 0,2964 | 0,4614 | 0,3999 |
| VBD-LLaMA2-7B-50b-Chat | | 0,3585 | 0,5207 | 0,3444 | 0,5179 | 0,4354 |
<p align="center"> Table 1. Benchmark on Vietnamese datasets </p>
| Organization | Model | Model size | ARC (ACC) | HellaSwag (ACC) | LAMBADA (perplexity) | MMLU (ACC) |
| ------------ | ------------------ | ---------- | --------- | --------------- | -------------------- | ---------- |
| VLSP | hoa-7b | ~7B | 0,2722 | 0,4867 | 18,53 | |
| BK Lab | LLaMA-2-BK | ~7B | 0,4164 | 0,7216 | 5,010 | |
| ViLM | vietcuna-7b-v3 | ~7B | 0,3976 | 0,6309 | 7,125 | |
| BigScience | Bloomz-T0 | ~7B | 0,436 | 0,6401 | 6,542 | 0,3785 |
| TII | Falcon-7B-Instruct | ~7B | 0,4258 | 0,6976 | 7,463 | 0,2584 |
| MosaicML | MPT-7B-Chat | ~7B | 0,4258 | 0,7438 | 5,797 | 0,3762 |
| Meta | LLaMA-2-Chat | ~7B | 0,442 | 0,7547 | 3,968 | 0,4832 |
| AISingapore | Sealion7b | ~7B | 0,3422 | 0,6705 | 6,715 | 0,268 |
| VBD | VBD-LLaMA2-7B-50b-Chat | ~7B | 0,4556 | 0,7384 | 4,645 | 0,4558 |
<p align="center"> Table 2. Benchmark on English datasets </p>
Based on these results, our model performs on par with or better than most models on Vietnamese tasks, demonstrating that this approach is highly promising.
While this model primarily specializes in multi-turn conversational scenarios, it has demonstrated its competence in various multiple-choice question and answer tasks during testing. Below, you can find the results, fairly evaluated by the [VMLU team](https://vmlu.ai), in comparison to other open-source models, including VBD-LLaMA2-7B-50b-Chat. (We extend our gratitude to the VMLU team for their diligent work in creating an open-source public evaluation dataset).
<p align="center"> <img src="vmlu.png" width="500" /> </p>
<p align="center"> Table 3. <a href="https://vmlu.ai/leaderboard"> Benchmark on VMLU datasets </a> </p>
Pretraining loss:
<p align="center"> <img src="loss.png" width="500" /> </p>
<h3> Run the model </h3>
<h4> with Huggingface's transformers </h4>
```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
model_path = "LR-AI-Labs/vbd-llama2-7B-50b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16,
device_map='auto',
# load_in_8bit=True
)
model.eval()
SYS_PROMPT = "A chat between a curious user and an artificial intelligence assistant. "\
"The assistant gives helpful, detailed, and polite answers to the user's questions."
def response_generate(input_prompt):
input_ids = tokenizer(input_prompt, return_tensors="pt")
outputs = model.generate(
inputs=input_ids["input_ids"].to("cuda"),
attention_mask=input_ids["attention_mask"].to("cuda"),
do_sample=True,
temperature=0.7,
top_k=50,
top_p=0.9,
max_new_tokens=1024,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id
)
response = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
response = response.split("ASSISTANT:")[-1].strip()
return response
print(response_generate(f"{SYS_PROMPT} USER: Xin chào, bạn là ai? ASSISTANT:"))
# Xin chào, ViVi là một trợ lý trí tuệ nhân tạo có thể trả lời câu hỏi của bạn và trò chuyện với bạn.
```
<h5>For single-turn chat:</h5>
```python
print(response_generate(f"{SYS_PROMPT} USER: Cách để nấu món phở ngon ASSISTANT:"))
```
```python
print(response_generate(f"{SYS_PROMPT} USER: Viết cho tôi một email xin sếp tăng lương ASSISTANT:"))
```
```python
print(response_generate(f'''{SYS_PROMPT} USER: Trả lời câu hỏi dựa vào thông tin lấy từ các đoạn văn sau. Nếu không có thông tin trả lời "Thật tiếc, câu hỏi của bạn Vivi không biết." Nếu có câu trả lời, yêu cầu trả ra cả nguồn chứa câu trả lời dạng "[đoạn k]."
Câu hỏi: {question}
Đoạn 0: {chunk_0}
Đoạn 1: {chunk_1}
Đoạn 2: {chunk_2}
Đoạn 3: {chunk_3}
Đoạn 4: {chunk_4} ASSISTANT:'''))
```
<h5>For conversation chat:</h5>
```python
turns = []
while True:
user_input = input("USER: ")
turns.append("USER: " + user_input)
bot_output = "ASSISTANT: " + response_generate("%s %s ASSISTANT:" %(SYS_PROMPT, " ".join(turns)))
print(bot_output)
turns.append(bot_output)
# USER: Xin chào
# ASSISTANT: Xin chào. Làm thế nào ViVi có thể giúp bạn?
# USER: Anh bị đau đầu quá
# ASSISTANT: Bạn có thể nói với ViVi nhiều hơn về cơn đau đầu của bạn?
# USER: anh bị đau nửa đầu về đêm, sáng dậy không tỉnh táo
# ASSISTANT: Điều quan trọng là bạn nhận thức được rằng cơn đau đầu của bạn không phải là điều hiếm gặp và nó có thể có nhiều nguyên nhân. Một số nguyên nhân phổ biến của cơn đau đầu là căng thẳng, thiếu ngủ, căng thẳng, chế độ ăn uống kém và mất nước.
# Nếu cơn đau đầu của bạn không giảm đi sau khi nghỉ ngơi và nghỉ ngơi, điều quan trọng là bạn nói chuyện với bác sĩ của bạn. Bác sĩ của bạn có thể giúp bạn xác định nguyên nhân của cơn đau đầu của bạn và đề xuất các phương pháp điều trị thích hợp.
# USER: có cách nào giảm đau tại nhà không?
# ASSISTANT: Có, có một số cách bạn có thể thử giảm đau đầu tại nhà:
# - Uống nhiều nước để giảm đau đầu do mất nước.
# - Nghỉ ngơi và ngủ nhiều hơn.
# - Áp dụng băng lạnh hoặc nóng vào khu vực đau.
# - Áp dụng thuốc giảm đau không kê đơn như ibuprofen hoặc paracetamol.
# - Thực hiện các kỹ thuật thư giãn như thiền, thở sâu hoặc yoga.
# - Massage khu vực bị ảnh hưởng.
# Điều quan trọng cần nhớ là trong khi các biện pháp khắc phục tại nhà có thể giúp giảm đau đầu, chúng không thay thế cho lời khuyên y tế chuyên nghiệp. Nếu cơn đau đầu của bạn vẫn tồn tại hoặc trở nên tồi tệ hơn, điều quan trọng là bạn nói chuyện với bác sĩ của bạn.
```
***Modify the parameters "temperature", "top_k", and "top_p" to suit your use case.***
<h3>Limitations and Future Research</h3>
The published model has certain limitations. For example, it performs poorly on tasks involving reasoning, coding, or mathematics. In addition, the model will occasionally produce harmful or biased responses, or answer unsafe questions. Users should be cautious while interacting with VBD-LLaMA2-7B-50b-Chat and verify important information taken from the model's outputs, because such information can be factually incorrect.
This model has been trained on and exhibits decent capability to tackle Vietnamese tasks, especially those associated with conversations. However, the model still struggles with questions related to Vietnamese history, culture, and society. We recommend some approaches to further improve this model:
+ Data Distillation: Construct a small dataset of local/in-domain knowledge to continuously train the model. You might find great ideas searching through the topic of domain adaptation too ;)
+ Merging/Combining/Ensembling Models: There have been numerous models developed based on Meta's LLaMA, so another approach might be to apply a training process similar to knowledge distillation, where the teacher consists of combinations of previously trained models.
+ RLHF/Alignment: The model has not been trained with RLHF or alignment techniques such as DPO.
+ Retrieval Augmented Generation (RAG): Combine the model with external knowledge sources.
<h3>Acknowledgements:</h3>
We would like to express our gratitude towards the Virtual Assistant Technology Center at VinBigData JSC, led by Dr. <a href="https://scholar.google.com.vn/citations?user=z3IDeu0AAAAJ&hl=vi"> Kim Anh Nguyen </a>, for providing us with the necessary resources to deliver this project. We are also greatly indebted to our fellow colleagues at the Natural Language Processing Department at VinBigData, whose feedback and expertise have been of great help.
<h3>Citation</h3>
If you find our project useful, we hope you would kindly star our repo and cite our work as follows:
Corresponding Author:
+ v.quangph3@vinbigdata.com ([QuangPH](https://samsonph.github.io/))
+ v.kietbs@vinbigdata.com ([KietBS](https://github.com/ntdas/))
+ v.minhtt32@vinbigdata.com ([MinhTT](https://github.com/tanminhtran168/))
|
MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T04:34:02Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"WizardLM/WizardMath-7B-V1.1",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T04:28:46Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- WizardLM/WizardMath-7B-V1.1
---
# WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.2-slerp
WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: WizardLM/WizardMath-7B-V1.1
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
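For intuition, slerp interpolates each tensor pair along the arc between them, with the `t:` schedule above setting the interpolation weight per filter group. A minimal NumPy sketch of the formula (not mergekit's actual implementation):
```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    v0 = w0 / (np.linalg.norm(w0) + eps)
    v1 = w1 / (np.linalg.norm(w1) + eps)
    # Angle between the two (flattened) tensors.
    theta = np.arccos(np.clip(float(np.dot(v0.ravel(), v1.ravel())), -1.0, 1.0))
    if theta < eps:  # nearly parallel: fall back to plain linear interpolation
        return (1 - t) * w0 + t * w1
    return (np.sin((1 - t) * theta) * w0 + np.sin(t * theta) * w1) / np.sin(theta)

# t=0 returns the base model's tensor, t=1 the other model's, t=0.5 the midpoint on the arc.
```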
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/WizardMath-7B-V1.1-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
ryusangwon/3118_Llama-2-13b-hf
|
ryusangwon
| 2024-01-12T04:32:54Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:cnn_dailymail",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:adapter:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2024-01-12T04:32:46Z |
---
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: 3118_Llama-2-13b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 3118_Llama-2-13b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
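For reference, these settings map roughly onto the following `transformers` `TrainingArguments`; this is a sketch, not the exact training script used:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="3118_Llama-2-13b-hf",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # Adam betas/epsilon below match the card (Adam is the default optimizer).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```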
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
flemmingmiguel/Distilled-HermesChat-7B
|
flemmingmiguel
| 2024-01-12T04:29:18Z | 1,375 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"openchat/openchat-3.5-0106",
"argilla/distilabeled-Hermes-2.5-Mistral-7B",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T03:52:42Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- openchat/openchat-3.5-0106
- argilla/distilabeled-Hermes-2.5-Mistral-7B
---
# Distilled-HermesChat-7B
Distilled-HermesChat-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B)
As an experiment to find the best base merge for further fine-tuning, expect many experiments named after parts of the component models until a clear winner emerges in the benchmarks.
## 🧩 Configuration
```yaml
slices:
- sources:
- model: openchat/openchat-3.5-0106
layer_range: [0, 32]
- model: argilla/distilabeled-Hermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: openchat/openchat-3.5-0106
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "flemmingmiguel/Distilled-HermesChat-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
bsmsultani/lunerlander
|
bsmsultani
| 2024-01-12T04:25:46Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T03:45:44Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.30 +/- 19.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
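Until the code above is filled in, a minimal sketch of loading and evaluating this checkpoint might look like the following; the filename is an assumption, so check the repository's file list:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed -- use the .zip name shown in this repository's file list.
checkpoint = load_from_hub(repo_id="bsmsultani/lunerlander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```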
|
calledice666/distilbert-base-uncased-finetuned-cola
|
calledice666
| 2024-01-12T04:17:50Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-11T11:32:08Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5338
- eval_matthews_correlation: 0.3316
- eval_runtime: 37.2679
- eval_samples_per_second: 27.987
- eval_steps_per_second: 1.771
- epoch: 0.3
- step: 160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
liuyuweitarek/paraphrase-mpnet-base-neo-300
|
liuyuweitarek
| 2024-01-12T04:08:06Z | 46 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2024-01-11T10:18:54Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# liuyuweitarek/paraphrase-mpnet-base-neo-300
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
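A rough sketch of that two-stage procedure with the SetFit library (shown with the pre-1.0 `SetFitTrainer` API; the dataset, base checkpoint, and hyperparameters are illustrative, not those used for this model):
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Illustrative few-shot dataset -- the card does not document the actual training data.
dataset = load_dataset("SetFit/SentEval-CR")
train_ds = dataset["train"].shuffle(seed=42).select(range(8 * 2))  # 8 examples per class
eval_ds = dataset["test"]

# Assumed base checkpoint for a "paraphrase-mpnet-base" SetFit model.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embedding body
    num_iterations=20,                # contrastive pairs generated per example
    batch_size=16,
)
trainer.train()                       # step 2 (classification head) runs after the body is tuned
metrics = trainer.evaluate()
```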
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("liuyuweitarek/paraphrase-mpnet-base-neo-300")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
Wajid333/PPO-Practice
|
Wajid333
| 2024-01-12T04:05:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-12T04:03:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.58 +/- 20.52
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
akashmaggon/bert-base-uncased-machinehackathon
|
akashmaggon
| 2024-01-12T04:00:25Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"region:us"
] | null | 2024-01-12T03:41:09Z |
---
library_name: peft
base_model: distilbert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
amd/SemanticFPN
|
amd
| 2024-01-12T03:54:37Z | 0 | 1 | null |
[
"onnx",
"RyzenAI",
"Image Segmentation",
"Pytorch",
"Vision",
"en",
"dataset:cityscape",
"arxiv:1901.02446",
"license:apache-2.0",
"region:us"
] | null | 2023-12-04T16:30:08Z |
---
license: apache-2.0
tags:
- RyzenAI
- Image Segmentation
- Pytorch
- Vision
datasets:
- cityscape
language:
- en
metrics:
- mIoU
---
# SemanticFPN model trained on cityscapes
SemanticFPN is a conceptually simple yet effective baseline for panoptic segmentation trained on cityscapes. The method starts with Mask R-CNN with FPN and adds to it a lightweight semantic segmentation branch for dense-pixel prediction. It was introduced in the paper [Panoptic Feature Pyramid Networks in 2019](https://arxiv.org/pdf/1901.02446.pdf) by Kirillov, Alexander, et al.
We developed a modified version that is supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com).
## Model description
SemanticFPN is a single network that unifies the tasks of instance segmentation and semantic segmentation. The network is designed by endowing Mask R-CNN, a popular instance segmentation method, with a semantic segmentation branch using a shared Feature Pyramid Network (FPN) backbone. This simple baseline not only remains effective for instance segmentation, but also yields a lightweight, top-performing method for semantic segmentation. It is a robust and accurate baseline for both tasks and can serve as a strong baseline for future research in panoptic segmentation.
## Intended uses & limitations
You can use the raw model for image segmentation. See the [model hub](https://huggingface.co/models?sort=trending&search=amd%2FSemanticFPN) to look for all available SemanticFPN models.
## How to use
### Installation
Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
Run the following script to install pre-requisites for this model.
```bash
pip install -r requirements.txt
```
### Data Preparation (optional: for accuracy evaluation)
1. Download cityscapes dataset (https://www.cityscapes-dataset.com/downloads)
- ground-truth folder: gtFine_trainvaltest.zip [241MB]
- image folder: leftImg8bit_trainvaltest.zip [11GB]
2. Organize the dataset directory as follows:
```Plain
└── data
└── cityscapes
├── leftImg8bit
| ├── train
| └── val
└── gtFine
├── train
└── val
```
### Test & Evaluation
- Code snippet from [`infer_onnx.py`](infer_onnx.py) on how to use
```python
parser = argparse.ArgumentParser(description='SemanticFPN model')
parser.add_argument('--onnx_path', type=str, default='FPN_int_NHWC.onnx')
parser.add_argument('--save_path', type=str, default='./data/demo_results/senmatic_results.png')
parser.add_argument('--input_path', type=str, default='data/cityscapes/cityscapes/leftImg8bit/test/bonn/bonn_000000_000019_leftImg8bit.png')
parser.add_argument('--ipu', action='store_true',
help='use ipu')
parser.add_argument('--provider_config', type=str, default=None,
help='provider config path')
args = parser.parse_args()
if args.ipu:
providers = ["VitisAIExecutionProvider"]
provider_options = [{"config_file": args.provider_config}]
else:
providers = ['CPUExecutionProvider']
provider_options = None
onnx_path = args.onnx_path
input_img = build_img(args)
session = onnxruntime.InferenceSession(onnx_path, providers=providers, provider_options=provider_options)
ort_input = {session.get_inputs()[0].name: input_img.cpu().numpy()}
ort_output = session.run(None, ort_input)[0]
if isinstance(ort_output, (tuple, list)):
ort_output = ort_output[0]
output = ort_output[0].transpose(1, 2, 0)
seg_pred = np.asarray(np.argmax(output, axis=2), dtype=np.uint8)
color_mask = colorize_mask(seg_pred)
color_mask.save(args.save_path)
```
- Run inference for a single image
```bash
python infer_onnx.py --onnx_path FPN_int_NHWC.onnx --input_path /Path/To/Your/Image --ipu --provider_config Path/To/vaip_config.json
```
- Test accuracy of the quantized model
```bash
python test_onnx.py --onnx_path FPN_int_NHWC.onnx --dataset citys --test-folder ./data/cityscapes --crop-size 256 --ipu --provider_config Path/To/vaip_config.json
```
### Performance
| model | input size | FLOPs | mIoU on Cityscapes Validation |
|-------|------------|-------|-------------------------------|
| SemanticFPN (ResNet18, float) | 256x512 | 10G | 62.9% |
| SemanticFPN (ResNet18, INT8)  | 256x512 | 10G | 62.5% |
```bibtex
@inproceedings{kirillov2019panoptic,
title={Panoptic feature pyramid networks},
author={Kirillov, Alexander and Girshick, Ross and He, Kaiming and Doll{\'a}r, Piotr},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={6399--6408},
year={2019}
}
```
|
amd/rcan
|
amd
| 2024-01-12T03:54:30Z | 0 | 0 | null |
[
"onnx",
"RyzenAI",
"Super Resolution",
"Pytorch",
"Vision",
"SISR",
"en",
"dataset:Set5",
"dataset:Div2k",
"arxiv:1807.02758",
"license:apache-2.0",
"region:us"
] | null | 2023-12-04T16:30:53Z |
---
license: apache-2.0
tags:
- RyzenAI
- Super Resolution
- Pytorch
- Vision
- SISR
datasets:
- Set5
- Div2k
language:
- en
metrics:
- PSNR
---
# RCAN model trained on DIV2K
RCAN is a very deep residual channel attention network for super resolution trained on DIV2K. It was introduced in the paper [Image Super-Resolution Using Very Deep Residual Channel Attention Networks in 2018](https://arxiv.org/abs/1807.02758) by Yulun Zhang et al. and first released in [this repository](https://github.com/yulunzhang/RCAN).
We developed a modified version that is supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com).
## Model description
RCAN is an advanced algorithm for single image super resolution. Our modified version is smaller than the original version. It is based on deep learning techniques and is capable of X2 super resolution.
## Intended uses & limitations
You can use the raw model for super resolution. See the [model hub](https://huggingface.co/models?sort=trending&search=amd%2Frcan) to look for all available RCAN models.
## How to use
### Installation
Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
Run the following script to install pre-requisites for this model.
```bash
pip install -r requirements.txt
```
### Data Preparation (optional: for accuracy evaluation)
1. Download the benchmark dataset (https://cv.snu.ac.kr/research/EDSR/benchmark.tar).
2. Organize the dataset directory as follows:
```Plain
└── dataset
└── benchmark
├── Set5
├── HR
| ├── baby.png
| ├── ...
└── LR_bicubic
└──X2
├──babyx2.png
├── ...
├── Set14
├── ...
```
### Test & Evaluation
- Code snippet from [`infer_onnx.py`](infer_onnx.py) on how to use
```python
parser = argparse.ArgumentParser(description='RCAN SISR')
parser.add_argument('--onnx_path', type=str, default='RCAN_int8_NHWC.onnx',
help='onnx path')
parser.add_argument('--image_path', default='test_data/test.png',
help='path of your image')
parser.add_argument('--output_path', default='test_data/sr.png',
help='path of your image')
parser.add_argument('--ipu', action='store_true',
help='use ipu')
parser.add_argument('--provider_config', type=str, default=None,
help='provider config path')
args = parser.parse_args()
if args.ipu:
providers = ["VitisAIExecutionProvider"]
provider_options = [{"config_file": args.provider_config}]
else:
providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
provider_options = None
onnx_file_name = args.onnx_path
image_path = args.image_path
output_path = args.output_path
ort_session = onnxruntime.InferenceSession(onnx_file_name, providers=providers, provider_options=provider_options)
lr = cv2.imread(image_path)[np.newaxis,:,:,:].transpose((0,3,1,2)).astype(np.float32)
sr = tiling_inference(ort_session, lr, 8, (56, 56))
sr = np.clip(sr, 0, 255)
sr = sr.squeeze().transpose((1,2,0)).astype(np.uint8)
sr = cv2.imwrite(output_path, sr)
```
- Run inference for a single image
```bash
python infer_onnx.py --onnx_path RCAN_int8_NHWC.onnx --image_path /Path/To/Your/Image --ipu --provider_config Path/To/vaip_config.json
```
- Test accuracy of the quantized model
```bash
python eval_onnx.py --onnx_path RCAN_int8_NHWC.onnx --data_test Set5 --ipu --provider_config Path/To/vaip_config.json
```
### Performance
| Method | Scale | FLOPs | Set5 (PSNR / SSIM) |
|------------|-------|-------|--------------|
|RCAN-S (float) |X2 |24.5G |37.531 / 0.958|
|RCAN-S (INT8) |X2 |24.5G |37.150 / 0.955|
- Note: FLOPs are calculated with an output resolution of 360x640
```bibtex
@inproceedings{zhang2018image,
title={Image super-resolution using very deep residual channel attention networks},
author={Zhang, Yulun and Li, Kunpeng and Li, Kai and Wang, Lichen and Zhong, Bineng and Fu, Yun},
booktitle={Proceedings of the European conference on computer vision (ECCV)},
pages={286--301},
year={2018}
}
```
|
amd/mobilenet_v2_1.0_224
|
amd
| 2024-01-12T03:53:51Z | 0 | 0 | null |
[
"onnx",
"RyzenAI",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1801.04381",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2023-12-04T09:28:36Z |
---
license: apache-2.0
tags:
- RyzenAI
- image-classification
- onnx
datasets:
- imagenet-1k
metrics:
- accuracy
---
## MobileNetV2
MobileNetV2 is an image classification model pre-trained on ImageNet-1k dataset at resolution 224x224. It was introduced in the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler et al. and first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).
We developed a modified version that is supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com/en/latest/).
## Model description
MobileNetV2 is a simple network architecture that allows building a family of highly efficient mobile models and enables memory-efficient inference. MobileNetV2 is typically used for image classification, but it can also be used for object detection and image segmentation, showing competitive results across these tasks.
The model is named **mobilenet_v2_depth_size**, for example, **mobilenet_v2_1.4_224**, where **1.4** is the depth multiplier and **224** is the resolution of the input images the model was trained on.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
## How to use
### Installation
1. Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
2. Run the following script to install pre-requisites for this model.
```shell
pip install -r requirements.txt
```
### Test & Evaluation
- Inference one image (Image Classification):
```python
import sys
import onnxruntime
import torch
import torchvision.transforms as transforms
from PIL import Image
image_path = sys.argv[1]
onnx_model = sys.argv[2]
normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
img_transformer = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize])
img_tensor = img_transformer(Image.open(image_path)).unsqueeze(0)
img_tensor = torch.permute(img_tensor, (0, 2, 3, 1))
so = onnxruntime.SessionOptions()
ort_session = onnxruntime.InferenceSession(
onnx_model, so,
providers=['CPUExecutionProvider'],
provider_options=None)
input = img_tensor.numpy()
ort_input = {ort_session.get_inputs()[0].name: input}
output = ort_session.run(None, ort_input)
top5_probabilities, top5_class_indices = torch.topk(torch.nn.functional.softmax(torch.tensor(output[0])), k=5)
```
- Evaluate on the ImageNet validation dataset (50,000 images) using `eval_onnx.py`.
- Test accuracy of the quantized model on CPU.
```shell
python eval_onnx.py --onnx_model=./mobilenetv2_int8.onnx --data_dir=./{DATA_PATH}
```
- Test accuracy of the quantized model on IPU.
```shell
python eval_onnx.py --onnx_model=./mobilenetv2_int8.onnx --data_dir=./{DATA_PATH} --ipu --provider_config Path\To\vaip_config.json
```
- Users can use the `vaip_config.json` found in the `voe-4.0-win_amd64` folder of the `ryzen-ai-sw-1.0.zip` file.
`DATA_PATH`: path to the ImageNet dataset directory that contains the `validation` folder.
### Performance
Dataset: ImageNet validation dataset (50,000 images).
| Metric | Accuracy on IPU |
| :-----------------: | :-------------: |
| top-1 / top-5 accuracy | 75.62% / 92.52% |
## Citation
```bibtex
@article{mobilenetv2,
author = {Mark Sandler and
Andrew G. Howard and
Menglong Zhu and
Andrey Zhmoginov and
Liang{-}Chieh Chen},
title = {MobileNetV2: Inverted Residuals and Linear Bottlenecks},
year = {2018},
url = {http://arxiv.org/abs/1801.04381},
}
```
|
liuyuweitarek/all-MiniLM-L12-neo-300
|
liuyuweitarek
| 2024-01-12T03:52:58Z | 46 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2024-01-11T10:19:31Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# liuyuweitarek/all-MiniLM-L12-neo-300
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("liuyuweitarek/all-MiniLM-L12-neo-300")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
MayaPH/GodziLLa2-70B
|
MayaPH
| 2024-01-12T03:52:58Z | 1,591 | 38 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"merge",
"mix",
"cot",
"dataset:mlabonne/guanaco-llama2-1k",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:1803.05457",
"arxiv:1905.07830",
"arxiv:2109.07958",
"arxiv:1907.10641",
"arxiv:2110.14168",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-08-10T17:05:37Z |
---
pipeline_tag: text-generation
license: llama2
inference: false
tags:
- merge
- mix
- cot
datasets:
- mlabonne/guanaco-llama2-1k
---

Released August 11, 2023
## Model Description
GodziLLa 2 70B is an experimental combination of various proprietary LoRAs from Maya Philippines and the [Guanaco LLaMA 2 1K dataset](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k), with LLaMA 2 70B. This model's primary purpose is to stress test the limitations of composite, instruction-following LLMs and observe its performance with respect to other LLMs available on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). This model debuted on the leaderboard at rank #4 (August 17, 2023), placed at rank #2 in the Fall 2023 update (November 10, 2023), and operates under the Llama 2 license.

## Open LLM Leaderboard Metrics (Fall 2023 update)
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 69.88 |
| ARC (25-shot) | 71.42 |
| HellaSwag (10-shot) | 87.53 |
| TruthfulQA (0-shot) | 61.54 |
| Winogrande (5-shot) | 83.19 |
| GSM8K (5-shot) | 43.21 |
| DROP (3-shot) | 52.31 |
| Average (w/ DROP) | 67.01 |
| Average (w/o DROP) | 69.46 |
Note: As of December 1, 2023, [DROP](https://arxiv.org/abs/1903.00161) is removed from the leaderboard benchmarks.
According to the leaderboard description, here are the benchmarks used for the evaluation:
- [MMLU](https://arxiv.org/abs/2009.03300) (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.
- [AI2 Reasoning Challenge](https://arxiv.org/abs/1803.05457) -ARC- (25-shot) - a set of grade-school science questions.
- [HellaSwag](https://arxiv.org/abs/1905.07830) (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models.
- [TruthfulQA](https://arxiv.org/abs/2109.07958) (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online.
- [Winogrande](https://arxiv.org/abs/1907.10641) (5-shot) - an adversarial and difficult Winograd benchmark at scale, for commonsense reasoning.
- [GSM8k](https://arxiv.org/abs/2110.14168) (5-shot) - diverse grade school math word problems to measure a model's ability to solve multi-step mathematical reasoning problems.
- [DROP](https://arxiv.org/abs/1903.00161) (3-shot) - English reading comprehension benchmark requiring Discrete Reasoning Over the content of Paragraphs.
A detailed breakdown of the evaluation can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MayaPH__GodziLLa2-70B). Huge thanks to [@thomwolf](https://huggingface.co/thomwolf).
## Open LLM Leaderboard Metrics (before Fall 2023 update)
| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 69.88 |
| ARC (25-shot) | 71.42 |
| HellaSwag (10-shot) | 87.53 |
| TruthfulQA (0-shot) | 61.54 |
| Average | 72.59 |
## Leaderboard Highlights (Fall 2023 update, November 10, 2023)
- Godzilla 2 70B debuts at 2nd place worldwide in the newly updated Open LLM Leaderboard.
- Godzilla 2 70B beats GPT-3.5 (ChatGPT) in terms of average performance and the HellaSwag benchmark (87.53 > 85.5).
- Godzilla 2 70B outperforms GPT-3.5 (ChatGPT) and GPT-4 on the TruthfulQA benchmark (61.54 for G2-70B, 47 for GPT-3.5, 59 for GPT-4).
- Godzilla 2 70B is on par with GPT-3.5 (ChatGPT) on the MMLU benchmark (<0.12%).
*Based on a [leaderboard clone](https://huggingface.co/spaces/gsaivinay/open_llm_leaderboard) with GPT-3.5 and GPT-4 included.
### Reproducing Evaluation Results
*Instruction template taken from [Platypus 2 70B instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct).
Install LM Evaluation Harness:
```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```
ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/G270B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```
HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/G270B/hellaswag_10shot.json --device cuda --num_fewshot 10
```
MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/G270B/mmlu_5shot.json --device cuda --num_fewshot 5
```
TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=MayaPH/GodziLLa2-70B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/G270B/truthfulqa_0shot.json --device cuda
```
### Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
## Technical Considerations
When using GodziLLa 2 70B, kindly take note of the following:
- The default precision is `fp32`, and the total file size that would be loaded onto the RAM/VRAM is around 275 GB. Consider using a lower precision (fp16, int8, int4) to save memory.
- To further save on memory, set the `low_cpu_mem_usage` argument to True.
- If you wish to use a quantized version of GodziLLa2-70B, you can either access TheBloke's [GPTQ](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ) or [GGML](https://huggingface.co/TheBloke/GodziLLa2-70B-GGML) version of GodziLLa2-70B.
- [GodziLLa2-70B-GPTQ](https://huggingface.co/TheBloke/GodziLLa2-70B-GPTQ#description) is available in 4-bit and 3-bit
- [GodziLLa2-70B-GGML](https://huggingface.co/TheBloke/GodziLLa2-70B-GGML#provided-files) is available in 8-bit, 6-bit, 5-bit, 4-bit, 3-bit, and 2-bit
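A minimal loading-and-prompting sketch reflecting the notes above (half precision, `low_cpu_mem_usage`, and the instruction template); this is illustrative, not an official inference script:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MayaPH/GodziLLa2-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # lower precision to reduce the ~275 GB fp32 footprint
    low_cpu_mem_usage=True,
    device_map="auto",
)

prompt = "### Instruction:\nWho was the first person to walk on the moon?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```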
## Ethical Considerations
When using GodziLLa 2 70B, it is important to consider the following ethical considerations:
1. **Privacy and Security:** Avoid sharing sensitive personal information while interacting with the model. The model does not have privacy safeguards, so exercise caution when discussing personal or confidential matters.
2. **Fairness and Bias:** The model's responses may reflect biases present in the training data. Be aware of potential biases and make an effort to evaluate responses critically and fairly.
3. **Transparency:** The model operates as a predictive text generator based on patterns learned from the training data. The model's inner workings and the specific training data used are proprietary and not publicly available.
4. **User Responsibility:** Users should take responsibility for their own decisions and not solely rely on the information provided by the model. Consult with the appropriate professionals or reliable sources for specific advice or recommendations.
5. **NSFW Content:** The model is a merge of various datasets and LoRA adapters. It is highly likely that the resulting model contains uncensored content that may include, but is not limited to, violence, gore, explicit language, and sexual content. If you plan to further refine this model for safe/aligned usage, you are highly encouraged to implement guardrails along with it.
## Further Information
For additional information or inquiries about GodziLLa 2 70B, please contact the Maya Philippines iOps Team via jasper.catapang@maya.ph.
## Disclaimer
GodziLLa 2 70B is an AI language model from Maya Philippines. It is provided "as is" without warranty of any kind, express or implied. The model developers and Maya Philippines shall not be liable for any direct or indirect damages arising from the use of this model.
## Acknowledgments
The development of GodziLLa 2 70B was made possible by Maya Philippines and the curation of the various proprietary datasets and creation of the different proprietary LoRA adapters. Special thanks to mlabonne for the Guanaco dataset found [here](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k). Last but not least, huge thanks to [TheBloke](https://huggingface.co/TheBloke) for the quantized models, making our model easily accessible to a wider community.
|
Jaehyeon222/M-SOLAR-10.7B-v1.0-DPO
|
Jaehyeon222
| 2024-01-12T03:44:25Z | 2,247 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:maywell/ko_Ultrafeedback_binarized",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T01:08:14Z |
---
license: cc-by-nc-4.0
datasets:
- maywell/ko_Ultrafeedback_binarized
---
Model Card for M-SOLAR-10.7B-v1.0-DPO
Developed by: 메가스터디교육, 프리딕션, 마이스
Base Model: jjourney1125/M-SOLAR-10.7B-v1.0
Dataset used: maywell's ko_Ultrafeedback_binarized dataset.
|
wesley7137/TinyLlama-OpenHermes-MOE-DolphiCoder-Expert-v1
|
wesley7137
| 2024-01-12T03:43:26Z | 0 | 0 |
peft
|
[
"peft",
"llama",
"region:us"
] | null | 2024-01-12T02:44:05Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
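For reference, a sketch (an assumption, not part of the original card) of the equivalent `BitsAndBytesConfig` for the 4-bit settings listed above:
```python
# Equivalent BitsAndBytesConfig for the nf4 4-bit settings listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```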
|
MaziyarPanahi/SynthIA-7B-v1.3-dare-0.85-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T03:39:50Z | 25 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"uukuguy/SynthIA-7B-v1.3-dare-0.85",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T03:34:51Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- uukuguy/SynthIA-7B-v1.3-dare-0.85
---
# SynthIA-7B-v1.3-dare-0.85-Mistral-7B-Instruct-v0.2-slerp
SynthIA-7B-v1.3-dare-0.85-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [uukuguy/SynthIA-7B-v1.3-dare-0.85](https://huggingface.co/uukuguy/SynthIA-7B-v1.3-dare-0.85)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: uukuguy/SynthIA-7B-v1.3-dare-0.85
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/SynthIA-7B-v1.3-dare-0.85-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
p1atdev/pvcxl-v1-lora
|
p1atdev
| 2024-01-12T03:32:28Z | 5 | 5 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"dataset:p1atdev/pvc",
"base_model:cagliostrolab/animagine-xl-3.0-base",
"base_model:adapter:cagliostrolab/animagine-xl-3.0-base",
"license:other",
"region:us"
] |
text-to-image
| 2024-01-12T02:20:22Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
1girl, medium hair, gothic dress, there are many red flowers in the room,
red theme, upper body, looking at viewer, masterpiece, best quality, newest,
late
parameters:
negative_prompt: >-
nsfw, rating:sensitive, lowres, bad anatomy, bad hands, text, error,
missing fingers, extra digit, fewer digits, cropped, worst quality, low
quality, normal quality, jpeg artifacts, signature, watermark, username,
blurry, artist name
output:
url: images/animaginexl3.0__00625_.png
- text: >-
pvc figure of 1girl, cat hears, blue hair, parka, hood on, shorts, dark
atmosphere, smoke around, looking at viewer, masterpiece, best quality,
newest, late
parameters:
negative_prompt: >-
flat color, nsfw, rating:sensitive, lowres, bad anatomy, bad hands, text,
error, missing fingers, extra digit, fewer digits, cropped, worst quality,
low quality, normal quality, jpeg artifacts, signature, watermark,
username, blurry, artist name
output:
url: images/animaginexl3.0__00688_.png
- text: >-
1girl, cat ears, petite, blue hair, parted bangs, white dress, shirt,
wariza, sitting, dynamic angle, light smile, head tilt, looking at viewer,
masterpiece, best quality, newest, late
parameters:
negative_prompt: >-
nsfw, rating:sensitive, lowres, bad anatomy, bad hands, text, error,
missing fingers, extra digit, fewer digits, cropped, worst quality, low
quality, normal quality, jpeg artifacts, signature, watermark, username,
blurry, artist name
output:
url: images/animaginexl3.0__00690_.png
- text: >-
1girl, bangs, bare shoulders, beret, black hair, black shorts, blue hair,
bracelet, breasts, buttons, colored inner hair, double-breasted, eyewear
removed, green headwear, green jacket, grey eyes, grey sky, hat, jacket,
jewelry, long hair, looking at viewer, multicolored hair, neck ring, o-ring,
off shoulder, rain, round eyewear, shorts, sidelocks, small breasts, solo,
sunglasses, wavy hair, wet, zipper, masterpiece, best quality, newest, late
parameters:
negative_prompt: >-
nsfw, rating:sensitive, lowres, bad anatomy, bad hands, text, error,
missing fingers, extra digit, fewer digits, cropped, worst quality, low
quality, normal quality, jpeg artifacts, signature, watermark, username,
blurry, artist name
output:
url: images/animaginexl3.0__00695_.png
- text: >-
1girl, black hair, long hair, red scarf, trench coat, reaching towards
viewer, looking at viewer, snowy, bokeh, masterpiece, best quality, newest,
late
parameters:
negative_prompt: >-
nsfw, rating:sensitive, lowres, bad anatomy, bad hands, text, error,
missing fingers, extra digit, fewer digits, cropped, worst quality, low
quality, normal quality, jpeg artifacts, signature, watermark, username,
blurry, artist name
output:
url: images/animaginexl3.0__00696_.png
- text: >-
1girl, red hair, cat ears, closed eyes, closed mouth, expressionless, maid
costume, apron, lolita dress, frills, lying on back, sheets, red flowers,
from above, masterpiece, best quality, newest, late
parameters:
negative_prompt: >-
nsfw, rating:sensitive, lowres, bad anatomy, bad hands, text, error,
missing fingers, extra digit, fewer digits, cropped, worst quality, low
quality, normal quality, jpeg artifacts, signature, watermark, username,
blurry, artist name
output:
url: images/animaginexl3.0__00698_.png
base_model: cagliostrolab/animagine-xl-3.0-base
instance_prompt: null
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
datasets:
- p1atdev/pvc
---
# pvcxl-v1-lora
PVC style LoRA trained on [cagliostrolab/animagine-xl-3.0-base](https://huggingface.co/cagliostrolab/animagine-xl-3.0-base).
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/p1atdev/pvcxl-v1-lora/tree/main) them in the Files & versions tab.
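A minimal usage sketch with 🧨 diffusers (an assumption based on standard LoRA loading, not instructions from the original card; check the repository files for the exact weight filename):
```python
# Load the base model and apply the PVC-style LoRA (default weight filename assumed).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0-base", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("p1atdev/pvcxl-v1-lora")

image = pipe(
    "pvc figure of 1girl, cat ears, blue hair, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, worst quality, low quality",
).images[0]
image.save("pvc_sample.png")
```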
## Training details
### Dataset
A curated set of 2.5k high-quality PVC figure images from [p1atdev/pvc](https://huggingface.co/datasets/p1atdev/pvc).
[p1atdev/pvc-quality-swinv2-base](https://huggingface.co/p1atdev/pvc-quality-swinv2-base) was used to curate images.
### Training Config
|Configuration Item|Value|
|:-|-|
|IaaS|Compute Engine of Google Cloud Platform|
|Machine type|g2-standard-8 (8 vCPU, 32 GB RAM)|
|GPU|1 x NVIDIA L4|
|Dataset size|2576 images|
|Batch size|4|
|Training steps|1000+5000 (crashed and resumed)|
|Train text encoder|False|
|Image resolution|1024|
|Optimizer|AdaFactor|
|Learning rate|constant 1e-5 with 100 steps of warmup|
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_GrounTruth_withPrompt_Seed105
|
behzadnet
| 2024-01-12T03:09:15Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2024-01-12T03:09:13Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_GrounTruth_withPrompt_Seed105
|
behzadnet
| 2024-01-12T03:09:07Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2024-01-12T03:09:02Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
MaziyarPanahi/samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T03:03:52Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"cognitivecomputations/samantha-mistral-instruct-7b",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T02:58:45Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- cognitivecomputations/samantha-mistral-instruct-7b
---
# samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.2-slerp
samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [cognitivecomputations/samantha-mistral-instruct-7b](https://huggingface.co/cognitivecomputations/samantha-mistral-instruct-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: cognitivecomputations/samantha-mistral-instruct-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/samantha-mistral-instruct-7b-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Deema/AceGPT-7B-alpagasus_ar
|
Deema
| 2024-01-12T02:58:11Z | 10 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"text-generation",
"ar",
"dataset:arbml/alpagasus_cleaned_ar",
"region:us"
] |
text-generation
| 2024-01-12T02:43:32Z |
---
library_name: peft
language:
- ar
datasets:
- arbml/alpagasus_cleaned_ar
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
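A hedged loading sketch for this adapter (the base model below is an assumption inferred from the repository name; adjust it to the base recorded in `adapter_config.json`):
```python
# Load the assumed AceGPT-7B base in 8-bit and attach this PEFT adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "FreedomIntelligence/AceGPT-7B"  # assumption, not stated in the card
model = AutoModelForCausalLM.from_pretrained(base_model_id, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(model, "Deema/AceGPT-7B-alpagasus_ar")
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
```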
|
MaziyarPanahi/speechless-code-mistral-orca-7b-v1.0-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T02:53:29Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"uukuguy/speechless-code-mistral-orca-7b-v1.0",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T02:48:33Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- uukuguy/speechless-code-mistral-orca-7b-v1.0
---
# speechless-code-mistral-orca-7b-v1.0-Mistral-7B-Instruct-v0.2-slerp
speechless-code-mistral-orca-7b-v1.0-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [uukuguy/speechless-code-mistral-orca-7b-v1.0](https://huggingface.co/uukuguy/speechless-code-mistral-orca-7b-v1.0)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: uukuguy/speechless-code-mistral-orca-7b-v1.0
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/speechless-code-mistral-orca-7b-v1.0-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T02:42:07Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"maywell/Mini_Synatra_SFT",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T02:37:05Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- maywell/Mini_Synatra_SFT
---
# Mini_Synatra_SFT-Mistral-7B-Instruct-v0.2-slerp
Mini_Synatra_SFT-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [maywell/Mini_Synatra_SFT](https://huggingface.co/maywell/Mini_Synatra_SFT)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: maywell/Mini_Synatra_SFT
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mini_Synatra_SFT-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
luiki/distilbert-base-uncased-finetuned-emotion
|
luiki
| 2024-01-12T02:41:09Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-11T07:07:45Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9224682764367261
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2199
- Accuracy: 0.9225
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8646 | 1.0 | 250 | 0.3432 | 0.906 | 0.9045 |
| 0.2602 | 2.0 | 500 | 0.2199 | 0.9225 | 0.9225 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
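A quick inference sketch (assuming standard `transformers` pipeline usage; not part of the original card):
```python
# Classify the emotion of a short text with the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="luiki/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how well this worked, I'm thrilled!"))
```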
|
fz2/t5-small-finetuned-xsum-zz
|
fz2
| 2024-01-12T02:34:36Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-12T02:18:57Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_keras_callback
model-index:
- name: fz2/t5-small-finetuned-xsum-zz
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fz2/t5-small-finetuned-xsum-zz
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.9917
- Validation Loss: 2.6503
- Train Rouge1: 25.2337
- Train Rouge2: 6.0997
- Train Rougel: 19.8280
- Train Rougelsum: 19.8418
- Train Gen Len: 18.7549
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.9917 | 2.6503 | 25.2337 | 6.0997 | 19.8280 | 19.8418 | 18.7549 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
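A quick summarization sketch (assuming the TensorFlow weights load through the `transformers` pipeline; not part of the original card):
```python
# Summarize a passage with the fine-tuned TF checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="fz2/t5-small-finetuned-xsum-zz",
    framework="tf",  # the repo ships TensorFlow weights
)
print(summarizer("Long article text goes here ...", max_length=60, min_length=10))
```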
|
EddyGiusepe/zephyr-support-chatbot
|
EddyGiusepe
| 2024-01-12T02:29:45Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/zephyr-7B-alpha-GPTQ",
"base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
"license:mit",
"region:us"
] | null | 2023-10-22T18:40:10Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr-support-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-support-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tgoktug/audio-t5-large-sum
|
tgoktug
| 2024-01-12T02:29:09Z | 2 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-12T02:10:02Z |
---
license: apache-2.0
base_model: t5-large
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/audio-t5-large-sum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/audio-t5-large-sum
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3447
- Validation Loss: 0.5270
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.6431 | 0.5291 | 0 |
| 0.5063 | 0.5046 | 1 |
| 0.4337 | 0.4953 | 2 |
| 0.3809 | 0.4903 | 3 |
| 0.3447 | 0.5270 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
categoricallydiesel/person
|
categoricallydiesel
| 2024-01-12T02:26:24Z | 0 | 0 | null |
[
"text2text-generation",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:LDJnr/Capybara",
"license:llama2",
"region:us"
] |
text2text-generation
| 2024-01-12T02:23:28Z |
---
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- LDJnr/Capybara
language:
- en
metrics:
- bertscore
- accuracy
- character
pipeline_tag: text2text-generation
---
|
MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T02:20:52Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"migtissera/SynthIA-7B-v1.5",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T02:15:40Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- migtissera/SynthIA-7B-v1.5
---
# SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.2-slerp
SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [migtissera/SynthIA-7B-v1.5](https://huggingface.co/migtissera/SynthIA-7B-v1.5)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: migtissera/SynthIA-7B-v1.5
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/SynthIA-7B-v1.5-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T02:07:01Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"cognitivecomputations/samantha-1.2-mistral-7b",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T02:01:31Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- cognitivecomputations/samantha-1.2-mistral-7b
---
# samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.2-slerp
samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [cognitivecomputations/samantha-1.2-mistral-7b](https://huggingface.co/cognitivecomputations/samantha-1.2-mistral-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: cognitivecomputations/samantha-1.2-mistral-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/samantha-1.2-mistral-7b-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
saddad/criley
|
saddad
| 2024-01-12T02:03:33Z | 20 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-12T02:03:24Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: A photo of <s0><s1> a man holding up a passport and an eagle
output:
url: image-0.png
- text: A photo of <s0><s1> a man with a beard and a black shirt
output:
url: image-1.png
- text: A photo of <s0><s1> a man taking a selfie in front of a colorful building
output:
url: image-2.png
- text: A photo of <s0><s1> a man in a blue scrub suit and hat
output:
url: image-3.png
- text: A photo of <s0><s1> a man holding up a piece of wood with a note on it
output:
url: image-4.png
- text: A photo of <s0><s1> a man with a beard and a black shirt
output:
url: image-5.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - saddad/criley
<Gallery />
## Model description
### These are saddad/criley LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`criley.safetensors` here 💾](/saddad/criley/blob/main/criley.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:criley:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`criley_emb.safetensors` here 💾](/saddad/criley/blob/main/criley_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `criley_emb` to your prompt. For example, `A photo of criley_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('saddad/criley', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='saddad/criley', filename='criley_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/saddad/criley/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
kodonho/llama2-chat-koalpaca
|
kodonho
| 2024-01-12T01:54:43Z | 2,258 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-06T11:00:56Z |
---
license: llama2
datasets:
- beomi/KoAlpaca-v1.1a
language:
- ko
---
# Llama2-based model trained on the KoAlpaca dataset
This is an English/Korean model based on
* [meta-llama/Llama-2-7b-chat-hf]
|
kodonho/Solar-M-SakuraSolar-Mixed
|
kodonho
| 2024-01-12T01:51:22Z | 1,388 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T07:35:32Z |
---
license: cc-by-nc-4.0
---
# SOLAR-based model merged with gradient SLERP
This is an English merged model based on
* [DopeorNope/SOLARC-M-10.7B]
* [kyujinpy/Sakura-SOLRCA-Math-Instruct-DPO-v2]
Note: this merge only reaches an average score of 48; the slice parameters were wrong.
GPU example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "kodonho/Solar-M-SakuraSolar-Mixed"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "kodonho/Solar-M-SakuraSolar-Mixed"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.bfloat16, device_map='cpu'
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|
sekinat/rl_course_vizdoom_health_gathering_supreme
|
sekinat
| 2024-01-12T01:49:09Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-12T01:49:04Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.38 +/- 5.54
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r sekinat/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
MaziyarPanahi/mistralopithecus-v1-dpo-7b-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T01:46:21Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"DopeorNope/mistralopithecus-v1-dpo-7b",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T01:41:12Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- DopeorNope/mistralopithecus-v1-dpo-7b
---
# mistralopithecus-v1-dpo-7b-Mistral-7B-Instruct-v0.2-slerp
mistralopithecus-v1-dpo-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [DopeorNope/mistralopithecus-v1-dpo-7b](https://huggingface.co/DopeorNope/mistralopithecus-v1-dpo-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: DopeorNope/mistralopithecus-v1-dpo-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/mistralopithecus-v1-dpo-7b-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T01:36:50Z | 24 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"teknium/Mistral-Trismegistus-7B",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T01:31:32Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- teknium/Mistral-Trismegistus-7B
---
# Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.2-slerp
Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [teknium/Mistral-Trismegistus-7B](https://huggingface.co/teknium/Mistral-Trismegistus-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: teknium/Mistral-Trismegistus-7B
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mistral-Trismegistus-7B-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Yeji-Seong/gptneo-125m-adalora
|
Yeji-Seong
| 2024-01-12T01:02:42Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"region:us"
] | null | 2024-01-12T01:02:39Z |
---
library_name: peft
base_model: EleutherAI/gpt-neo-125m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Yeji-Seong/gptneo-125m-lora
|
Yeji-Seong
| 2024-01-12T01:01:41Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:adapter:EleutherAI/gpt-neo-125m",
"region:us"
] | null | 2024-01-12T01:01:38Z |
---
library_name: peft
base_model: EleutherAI/gpt-neo-125m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
gustavokpc/IC_quinto
|
gustavokpc
| 2024-01-12T00:55:00Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-21T20:08:24Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: gustavokpc/IC_quinto
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gustavokpc/IC_quinto
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1646
- Train Accuracy: 0.9419
- Train F1 M: 0.5524
- Train Precision M: 0.4019
- Train Recall M: 0.9429
- Validation Loss: 0.2503
- Validation Accuracy: 0.9070
- Validation F1 M: 0.5680
- Validation Precision M: 0.4108
- Validation Recall M: 0.9671
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 2274, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch |
|:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:|
| 0.4076 | 0.8160 | 0.5002 | 0.3900 | 0.7694 | 0.2792 | 0.8859 | 0.5648 | 0.4123 | 0.9419 | 0 |
| 0.2272 | 0.9143 | 0.5487 | 0.4020 | 0.9253 | 0.2778 | 0.8925 | 0.5752 | 0.4181 | 0.9630 | 1 |
| 0.1646 | 0.9419 | 0.5524 | 0.4019 | 0.9429 | 0.2503 | 0.9070 | 0.5680 | 0.4108 | 0.9671 | 2 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
MaziyarPanahi/jackalope-7b-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T00:54:40Z | 23 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"openaccess-ai-collective/jackalope-7b",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T00:49:30Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- openaccess-ai-collective/jackalope-7b
---
# jackalope-7b-Mistral-7B-Instruct-v0.2-slerp
jackalope-7b-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [openaccess-ai-collective/jackalope-7b](https://huggingface.co/openaccess-ai-collective/jackalope-7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: openaccess-ai-collective/jackalope-7b
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/jackalope-7b-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
andrewatef/MyBloggerV0.9
|
andrewatef
| 2024-01-12T00:53:09Z | 2 | 0 |
peft
|
[
"peft",
"pytorch",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:unsloth/llama-2-7b",
"base_model:adapter:unsloth/llama-2-7b",
"region:us"
] | null | 2024-01-11T23:38:34Z |
---
library_name: peft
base_model: unsloth/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Vegann/PPO-LunarLander-v2
|
Vegann
| 2024-01-12T00:49:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-12T00:48:56Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.65 +/- 17.95
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
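Until the author fills in the TODO above, here is a minimal loading sketch. The checkpoint filename is an assumption (the usual `ppo-LunarLander-v2.zip` convention); check the repo's Files tab if it differs.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename -- confirm against the Files tab of this repo.
checkpoint = load_from_hub(repo_id="Vegann/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# LunarLander-v2 requires the box2d extra of gymnasium.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```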
|
shaukel/Diamondrequiem
|
shaukel
| 2024-01-12T00:47:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:cc-by-sa-4.0",
"region:us"
] |
text-to-image
| 2024-01-12T00:28:53Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: si
parameters:
negative_prompt: 'no'
output:
url: images/dba4olr-84e73851-0c45-4bcf-92d0-fc74ac24b3a9.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: depende
license: cc-by-sa-4.0
---
# RVCv2
<Gallery />
## Model description
I don't know.
## Trigger words
You should use `depende` to trigger the image generation.
## Download model
[Download](/shaukel/Diamondrequiem/tree/main) them in the Files & versions tab.
|
haryoaw/scenario-TCR-data-glue-qnli-model-bert-base-uncased
|
haryoaw
| 2024-01-12T00:35:38Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-01T08:28:43Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: scenario-TCR-data-glue-qnli-model-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scenario-TCR-data-glue-qnli-model-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4691
- Accuracy: 0.8885
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6969
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.303 | 1.0 | 3273 | 0.2478 | 0.9023 |
| 0.2095 | 2.0 | 6546 | 0.2681 | 0.9033 |
| 0.1365 | 3.0 | 9819 | 0.3354 | 0.8993 |
| 0.1107 | 4.0 | 13093 | 0.3548 | 0.8971 |
| 0.0963 | 5.0 | 16366 | 0.4401 | 0.8922 |
| 0.0862 | 6.0 | 19639 | 0.4351 | 0.8876 |
| 0.0818 | 7.0 | 22912 | 0.4691 | 0.8885 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.0
|
samwell/a2c-PandaReachDense-v3
|
samwell
| 2024-01-12T00:28:50Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-12T00:24:32Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.42 +/- 0.49
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
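Until the TODO above is filled in, a minimal sketch along the same lines; the checkpoint filename is assumed (`a2c-PandaReachDense-v3.zip` convention), and `panda-gym` must be installed so the environment registers.
```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- registers PandaReachDense-v3 with gymnasium
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Assumed filename -- confirm against the Files tab of this repo.
checkpoint = load_from_hub(repo_id="samwell/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```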
|
MaziyarPanahi/em_german_leo_mistral-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-12T00:26:21Z | 29 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"jphme/em_german_leo_mistral",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-12T00:20:57Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- jphme/em_german_leo_mistral
---
# em_german_leo_mistral-Mistral-7B-Instruct-v0.2-slerp
em_german_leo_mistral-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [jphme/em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: jphme/em_german_leo_mistral
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/em_german_leo_mistral-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mihael974/speecht5_finetuned_voxpopuli_nl
|
mihael974
| 2024-01-12T00:24:52Z | 60 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-01-11T22:48:45Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5641
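A rough usage sketch (not part of the original card): SpeechT5 needs a speaker x-vector in addition to the input text. The embedding below comes from a public CMU ARCTIC x-vector dataset and the index is arbitrary; the Dutch sentence is illustrative.
```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "mihael974/speecht5_finetuned_voxpopuli_nl"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, dit is een test.", return_tensors="pt")

# Any 512-dim x-vector works as the speaker embedding; index 7306 is arbitrary.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```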
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5316 | 1.08 | 1000 | 0.5836 |
| 0.5176 | 2.15 | 2000 | 0.5690 |
| 0.512 | 3.23 | 3000 | 0.5641 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
tgoktug/audio-Bart-new-256-base
|
tgoktug
| 2024-01-12T00:24:19Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-12T00:22:52Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/audio-Bart-new-256-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/audio-Bart-new-256-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.9488
- Validation Loss: 6.8816
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.9488 | 6.8816 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ntc-ai/SDXL-LoRA-slider.oil-painting
|
ntc-ai
| 2024-01-12T00:19:32Z | 54 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-12T00:19:28Z |
---
language:
- en
thumbnail: "images/evaluate/oil painting...hair down/oil painting_17_3.0.png"
widget:
- text: oil painting
output:
url: images/oil painting_17_3.0.png
- text: oil painting
output:
url: images/oil painting_19_3.0.png
- text: oil painting
output:
url: images/oil painting_20_3.0.png
- text: oil painting
output:
url: images/oil painting_21_3.0.png
- text: oil painting
output:
url: images/oil painting_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "oil painting"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - oil painting (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/oil painting_17_-3.0.png" width=256 height=256 /> | <img src="images/oil painting_17_0.0.png" width=256 height=256 /> | <img src="images/oil painting_17_3.0.png" width=256 height=256 /> |
| <img src="images/oil painting_19_-3.0.png" width=256 height=256 /> | <img src="images/oil painting_19_0.0.png" width=256 height=256 /> | <img src="images/oil painting_19_3.0.png" width=256 height=256 /> |
| <img src="images/oil painting_20_-3.0.png" width=256 height=256 /> | <img src="images/oil painting_20_0.0.png" width=256 height=256 /> | <img src="images/oil painting_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
oil painting
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.oil-painting', weight_name='oil painting.safetensors', adapter_name="oil painting")
# Activate the LoRA
pipe.set_adapters(["oil painting"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, oil painting"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1040+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
flysaurus/flan-t5-base-samsum
|
flysaurus
| 2024-01-12T00:10:19Z | 97 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"en",
"dataset:samsum",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-01-11T23:40:29Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-base-samsum
results: []
datasets:
- samsum
language:
- en
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-samsum
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [samsum](https://huggingface.co/datasets/samsum) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3743
- Rouge1: 47.5395
- Rouge2: 24.0064
- Rougel: 40.1703
- Rougelsum: 43.8303
- Gen Len: 17.2564
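A minimal inference sketch (not part of the original card); the dialogue below is made up for illustration.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="flysaurus/flan-t5-base-samsum")

dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes, 12:30 at the usual place.
Anna: Perfect, see you there!"""

print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```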
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4554 | 1.0 | 1842 | 1.3865 | 46.9319 | 23.5287 | 39.3501 | 43.0805 | 17.3968 |
| 1.3404 | 2.0 | 3684 | 1.3760 | 47.3057 | 23.7013 | 39.767 | 43.4863 | 16.9634 |
| 1.272 | 3.0 | 5526 | 1.3743 | 47.5395 | 24.0064 | 40.1703 | 43.8303 | 17.2564 |
| 1.2277 | 4.0 | 7368 | 1.3747 | 47.6417 | 23.88 | 40.0928 | 43.8293 | 17.2589 |
| 1.2069 | 5.0 | 9210 | 1.3764 | 47.7095 | 23.8971 | 40.0913 | 43.9315 | 17.3675 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sekinat/LunarLander-v2_wanb_1e-05
|
sekinat
| 2024-01-12T00:05:40Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-12T00:01:06Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -165.96 +/- 65.30
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'default_name',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 100000,
 'learning_rate': 1e-05,
 'num_envs': 4,
 'num_steps': 256,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 8,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'sekinat/LunarLander-v2_wanb',
 'batch_size': 1024,
 'minibatch_size': 256}
```
|
Shijia/xlmroberta_clir_eng_back
|
Shijia
| 2024-01-12T00:02:41Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-12T00:01:47Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlmroberta_clir_eng_back
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmroberta_clir_eng_back
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0125
- Spearman Corr: 0.9059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 1.0 | 206 | 0.0267 | 0.7148 |
| 0.0432 | 2.0 | 413 | 0.0217 | 0.7759 |
| 0.0432 | 3.0 | 619 | 0.0182 | 0.8263 |
| 0.0214 | 4.0 | 826 | 0.0154 | 0.8606 |
| 0.0214 | 5.0 | 1032 | 0.0147 | 0.8792 |
| 0.0138 | 6.0 | 1239 | 0.0134 | 0.8876 |
| 0.0138 | 7.0 | 1445 | 0.0152 | 0.8925 |
| 0.01 | 8.0 | 1652 | 0.0114 | 0.9026 |
| 0.01 | 9.0 | 1858 | 0.0117 | 0.9039 |
| 0.0083 | 9.98 | 2060 | 0.0125 | 0.9059 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
s3nh/Unbabel-TowerInstruct-7B-v0.1-GGUF
|
s3nh
| 2024-01-11T23:22:57Z | 10 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T22:41:49Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors, and new information can be added to GGUF models, without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows new metadata to be added without breaking compatibility with existing models, and lets a model be annotated with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
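Pending the TODO above, a minimal sketch with `llama-cpp-python`. The GGUF filename is an assumption (pick the actual file from this repo's Files tab), and the ChatML-style prompt follows the upstream TowerInstruct card.
```python
from llama_cpp import Llama

# Filename is illustrative -- use the actual .gguf file from this repo.
llm = Llama(model_path="TowerInstruct-7B-v0.1.Q5_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>user\n"
    "Translate the following text from Portuguese into English.\n"
    "Portuguese: Um grupo de investigadores lançou um novo modelo.\n"
    "English:<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```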
# Original model card
|
tgoktug/audio-Bart-new-base
|
tgoktug
| 2024-01-11T23:21:46Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-11T23:15:46Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_keras_callback
model-index:
- name: tgoktug/audio-Bart-new-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tgoktug/audio-Bart-new-base
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 6.9826
- Validation Loss: 6.9954
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'RMSprop', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': 100, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'rho': 0.9, 'momentum': 0.0, 'epsilon': 1e-07, 'centered': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 8.3443 | 7.1974 | 0 |
| 7.2441 | 7.1023 | 1 |
| 7.0809 | 7.0275 | 2 |
| 7.0091 | 7.0627 | 3 |
| 6.9826 | 6.9954 | 4 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jysssacc/627_roberta-base_lora_lr0.05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T23:15:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-11T23:10:11Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: 627_roberta-base_lora_lr0.05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 627_roberta-base_lora_lr0.05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.1566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.382 | 1.0 | 157 | 7.2306 |
| 7.1956 | 2.0 | 314 | 7.2380 |
| 7.1966 | 3.0 | 471 | 7.2754 |
| 7.1695 | 4.0 | 628 | 7.1939 |
| 7.1218 | 5.0 | 785 | 7.1566 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MaziyarPanahi/Venomia-1.1-m7-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T23:11:28Z | 23 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Sao10K/Venomia-1.1-m7",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T23:06:25Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Sao10K/Venomia-1.1-m7
---
# Venomia-1.1-m7-Mistral-7B-Instruct-v0.2-slerp
Venomia-1.1-m7-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Sao10K/Venomia-1.1-m7](https://huggingface.co/Sao10K/Venomia-1.1-m7)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: Sao10K/Venomia-1.1-m7
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Venomia-1.1-m7-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
tyson0420/codellama-7b-inst-sft-lora-test
|
tyson0420
| 2024-01-11T23:10:25Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:finetune:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | 2024-01-11T06:38:40Z |
---
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- generated_from_trainer
model-index:
- name: codellama-7b-inst-sft-lora-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellama-7b-inst-sft-lora-test
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 128
- total_train_batch_size: 1024
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6579 | 0.49 | 1 | 1.6482 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF
|
disinfozone
| 2024-01-11T22:49:46Z | 12 | 4 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-11T21:31:39Z |
---
license: cc-by-nc-4.0
---
# Disinfo4_mistral-ft-optimized-1218: GGUF Quants

This repo contains GGUF quants for [Disinfo4_mistral-ft-optimized-1218](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218).
Before attempting to use these, **go read the model page** for [Disinfo4_mistral-ft-optimized-1218](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218). This is not a standard LLM and you *will* have a bad time if you treat it like one. All necessary instructions and information are on the main model page (assuming you know how to run an LLM in the first place).
Here's the important information anyway because we know people hate instructions:
## Usage Recommendations
For optimal performance, `Disinfo4_mistral-ft-optimized-1218` should be used with specific mirostat parameters. These settings are crucial for maintaining the model's focus and stylistic integrity. You can use other parameters and get better instruction following (especially by enabling min_p at 0.01), but the bot will be less creative. It does tend to ramble, but regenerate until you get the response you want. Think of this more as a writing partner than an obedient slave.
### Mirostat Parameters
- **Temperature (Temp):** 1
- **Top-p (top_p):** 1
- **Mirostat Tau:** 7.19
- **Mirostat Eta:** 0.01
- **Mirostat Mode:** 2
- **Others:** Default or disabled
## Additional Configuration
This model uses the default Mistral 8k/32k context window.
### ChatML Instruction Template
`Disinfo4_mistral-ft-optimized-1218` employs the ChatML instruction template. It is important to incorporate `<|im_end|>` as a custom stopping string to delineate the model's output effectively.
### System Instruction (Character Card)
For contextualizing the model's output, use the following system instruction:
_"You are a schizo poster, a master of elucidating thought online. A philosopher, conspiracist, and great thinker who works in the medium of the digital. Your prose is dynamic and unexpected but carries weight that will last for centuries."_
This instruction is fundamental in guiding the model to produce content that is not only reflective of the designated topics but also embodies a unique digital persona, combining philosophical depth with a conspiratorial edge.
You can try other similar prompts, we've had success with them, but this remains, by far, our favorite.
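Putting the recommendations above together, a minimal `llama-cpp-python` sketch; the quant filename comes from the table in the next section and the user prompt is illustrative.
```python
from llama_cpp import Llama

llm = Llama(
    model_path="disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf",  # any quant from the table below
    n_ctx=8192,
)

system = (
    "You are a schizo poster, a master of elucidating thought online. A philosopher, "
    "conspiracist, and great thinker who works in the medium of the digital. Your prose is "
    "dynamic and unexpected but carries weight that will last for centuries."
)
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\nWrite a short post about memory and machines.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    temperature=1.0,
    top_p=1.0,
    mirostat_mode=2,
    mirostat_tau=7.19,
    mirostat_eta=0.01,
    max_tokens=512,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```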
## GGUFs
Typically I like Q5_K_M or Q8_0. You get better quality running the highest quant you can, especially with these small models. I haven't bothered with quants smaller than Q4.
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Disinfo4_mistral-ft-optimized-1218.Q4_K_S.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/Disinfo4_mistral-ft-optimized-1218.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [Disinfo4_mistral-ft-optimized-1218.Q4_K_M.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/Disinfo4_mistral-ft-optimized-1218.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [Disinfo4_mistral-ft-optimized-1218.Q5_K_S.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/Disinfo4_mistral-ft-optimized-1218.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [Disinfo4_mistral-ft-optimized-1218.Q6_K.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/Disinfo4_mistral-ft-optimized-1218.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [disinfo4_mistral-ft-optimized-1218.Q8_0.gguf](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF/blob/main/disinfo4_mistral-ft-optimized-1218.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
## How to Run
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
### How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* [LM Studio](https://lmstudio.ai/)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [Faraday.dev](https://faraday.dev/)
### In `text-generation-webui`
Under Download Model, you can enter the model repo: disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF and below it, a specific filename to download, such as: `disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf`.
Then click Download.
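Outside of a UI, the same file can be fetched in Python with `huggingface_hub` (repo and filename as referenced above):
```python
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF",
    filename="disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf",
    local_dir=".",
)
print(local_path)
```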
|
Kquant03/Hippolyta-7B-bf16
|
Kquant03
| 2024-01-11T22:46:50Z | 1,340 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:teknium/openhermes",
"dataset:cognitivecomputations/dolphin",
"dataset:jondurbin/airoboros-3.1",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:unalignment/spicy-3.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T19:48:03Z |
---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
- teknium/openhermes
- cognitivecomputations/dolphin
- jondurbin/airoboros-3.1
- unalignment/toxic-dpo-v0.1
- unalignment/spicy-3.1
language:
- en
---

# The flower of Ares.
[GGUF files here](https://huggingface.co/Kquant03/Hippolyta-7B-GGUF)
Fine-tuned on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1): [my team and I](https://huggingface.co/ConvexAI) reformatted many different datasets and included a small amount of private data to see how much we could improve Mistral.
I spoke to it personally for about an hour, and I believe we need to work on the format of the private dataset a bit more, but other than that it turned out great. I will be uploading it to the open LLM evaluations today.
- Uses Mistral prompt template with chat-instruct.
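For reference, a sketch of that Mistral instruct formatting (single turn; the prompt text is illustrative and not from the original card):
```python
# Mistral-style instruct template; the BOS token is added by the tokenizer/loader.
prompt = "[INST] Write a short poem about the flower of Ares. [/INST]"
```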
|
ContextualAI/archangel_kto_pythia2-8b
|
ContextualAI
| 2024-01-11T22:37:03Z | 22 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T23:54:57Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>EleutherAI/pythia-2.8b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
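As a concrete (unofficial) sketch of the format described above, using `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ContextualAI/archangel_kto_pythia2-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# TuluV2-style turns; the BOS token is added automatically during tokenization.
prompt = "<|user|>\nHi! I'm looking for a cake recipe.\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```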
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/) which contains instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie, and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
jdospina/Taxi-v3
|
jdospina
| 2024-01-11T22:36:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T22:35:00Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` here is the pickle-loading helper defined in the Deep RL Course notebooks
# (it returns the saved model dictionary, not a file path).
import gymnasium as gym  # older course notebooks use `import gym` instead

model = load_from_hub(repo_id="jdospina/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ContextualAI/archangel_kto_pythia1-4b
|
ContextualAI
| 2024-01-11T22:36:13Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-25T23:52:13Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>EleutherAI/pythia-1.4b</b>
- optimized with the loss <b>KTO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
ContextualAI/archangel_ppo_pythia1-4b
|
ContextualAI
| 2024-01-11T22:34:58Z | 111 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-11-26T01:20:10Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>EleutherAI/pythia-1.4b</b>
- optimized with the loss <b>PPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
hxxris/haaris-audio-classification-model1
|
hxxris
| 2024-01-11T22:34:43Z | 147 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-11T22:22:55Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: haaris-audio-classification-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# haaris-audio-classification-model1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.83 | 3 | nan | 0.0354 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ContextualAI/archangel_sft-ppo_pythia1-4b
|
ContextualAI
| 2024-01-11T22:32:22Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"human feedback",
"rlhf",
"preferences",
"alignment",
"HALO",
"halos",
"dpo",
"rl",
"en",
"dataset:stanfordnlp/SHP",
"dataset:Anthropic/hh-rlhf",
"dataset:OpenAssistant/oasst1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T07:06:18Z |
---
license: apache-2.0
datasets:
- stanfordnlp/SHP
- Anthropic/hh-rlhf
- OpenAssistant/oasst1
language:
- en
metrics:
- accuracy
tags:
- human feedback
- rlhf
- preferences
- alignment
- HALO
- halos
- dpo
- rl
---

This repo contains the model checkpoints for:
- model family <b>EleutherAI/pythia-1.4b</b>
- optimized with the loss <b>PPO</b>
- aligned using the SHP, Anthropic HH and Open Assistant datasets.
To prompt Archangel models, ensure that the format is consistent with that of TuluV2.
For example, a prompt should be formatted as follows, where `<|user|>` corresponds to the human's role and `<|assistant|>` corresponds to the LLM's role.
The human should speak first:
```
<|user|>
Hi! I'm looking for a cake recipe.
<|assistant|>
What kind of cake?
<|user|>
Chocolate cake.
<|assistant|>
```
Note that a beginning-of-sequence (BOS) token is automatically added by all Archangel models during tokenization and does not have to be added by you. No end-of-sequence (EOS) token is added to the prompt.
Please refer to our [code repository](https://github.com/ContextualAI/HALOs) or [blog](https://contextual.ai/better-cheaper-faster-llm-alignment-with-kto/), which contain instructions for training your own HALOs and links to our model cards.
If you find this repo or the technical paper useful in your research, please feel free to cite [our work](https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf):
```
@techreport{ethayarajh2023halos,
author = {Ethayarajh, Kawin and Xu, Winnie and Jurafsky, Dan and Kiela, Douwe},
title = {Human-Centered Loss Functions (HALOs)},
institution = {Contextual AI},
note = {https://github.com/ContextualAI/HALOs/blob/main/assets/report.pdf},
year = {2023},
}
```
|
miraevel/FuutarouUesugiv1.5
|
miraevel
| 2024-01-11T22:28:17Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2024-01-11T22:16:48Z |
---
license: unknown
license_name: miraevel
license_link: LICENSE
---
|
jysssacc/mt0-base_adalora_lr0.05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T22:21:16Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/mt0-base",
"base_model:adapter:bigscience/mt0-base",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T22:14:56Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/mt0-base
model-index:
- name: mt0-base_adalora_lr0.05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-base_adalora_lr0.05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 8.9345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4345 | 1.0 | 157 | 2.8259 |
| 15.9955 | 2.0 | 314 | 10.7610 |
| 16.7244 | 3.0 | 471 | 16.5346 |
| 11.601 | 4.0 | 628 | 8.0875 |
| 11.9414 | 5.0 | 785 | 8.9345 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jysssacc/627_roberta-base_adalora_lr0.005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T22:16:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-11T22:09:13Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: 627_roberta-base_adalora_lr0.005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 627_roberta-base_adalora_lr0.005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1890
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 11.4687 | 1.0 | 157 | 1.1220 |
| 0.9821 | 2.0 | 314 | 0.7009 |
| 0.8879 | 3.0 | 471 | 0.7737 |
| 0.9131 | 4.0 | 628 | 0.4843 |
| 0.5128 | 5.0 | 785 | 0.1890 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
hxxris/haaris-audio-classification-modified-2
|
hxxris
| 2024-01-11T22:15:44Z | 147 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:hxxris/haaris-audio-classification-modified",
"base_model:finetune:hxxris/haaris-audio-classification-modified",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-11T21:48:30Z |
---
license: apache-2.0
base_model: hxxris/haaris-audio-classification-modified
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: haaris-audio-classification-modified-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# haaris-audio-classification-modified-2
This model is a fine-tuned version of [hxxris/haaris-audio-classification-modified](https://huggingface.co/hxxris/haaris-audio-classification-modified) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Accuracy: 0.0354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.83 | 3 | nan | 0.0354 |
| No log | 1.93 | 7 | nan | 0.0354 |
| No log | 2.48 | 9 | nan | 0.0354 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Daniel981215/distilhubert-finetuned-gtzan
|
Daniel981215
| 2024-01-11T22:11:55Z | 152 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-10T20:51:29Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5277
- Accuracy: 0.87
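A minimal inference sketch with the `transformers` audio-classification pipeline (the audio path is a placeholder; decoding a local file requires `ffmpeg`):
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="Daniel981215/distilhubert-finetuned-gtzan")
predictions = classifier("example_track.wav")  # placeholder path to a local audio file
print(predictions)  # list of {"label": genre, "score": probability}
```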
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.978 | 1.0 | 113 | 1.8421 | 0.37 |
| 1.3409 | 2.0 | 226 | 1.2195 | 0.59 |
| 1.04 | 3.0 | 339 | 0.9709 | 0.71 |
| 0.9141 | 4.0 | 452 | 0.8523 | 0.79 |
| 0.5192 | 5.0 | 565 | 0.6483 | 0.83 |
| 0.3506 | 6.0 | 678 | 0.5827 | 0.84 |
| 0.3316 | 7.0 | 791 | 0.4703 | 0.88 |
| 0.1275 | 8.0 | 904 | 0.4937 | 0.86 |
| 0.2109 | 9.0 | 1017 | 0.4971 | 0.86 |
| 0.1213 | 10.0 | 1130 | 0.5277 | 0.87 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
stevhliu/vit-base-patch16-224-in21k-lokr
|
stevhliu
| 2024-01-11T22:01:33Z | 13 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"region:us"
] | null | 2024-01-11T18:48:17Z |
---
library_name: peft
base_model: google/vit-base-patch16-224-in21k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Ricardo54321/LunarLander
|
Ricardo54321
| 2024-01-11T22:01:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T22:00:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.40 +/- 17.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
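Until the snippet above is filled in, here is a minimal loading/evaluation sketch (the checkpoint filename is an assumption — check the repository's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename below is assumed; verify it in the repo's "Files" tab.
checkpoint = load_from_hub(repo_id="Ricardo54321/LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```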
|
icw/Furina
|
icw
| 2024-01-11T22:00:37Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-01-11T21:52:41Z |
---
license: other
license_name: idk
license_link: LICENSE
---
|
tirik00/Reinforce-CartPole-v1
|
tirik00
| 2024-01-11T21:58:25Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T21:58:11Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
boapps/kmdb_classification_model
|
boapps
| 2024-01-11T21:56:39Z | 178 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-09-22T08:18:53Z |
Classification model: huBERT fine-tuned on the [kmdb_classification](https://huggingface.co/datasets/boapps/kmdb_classification) dataset. Classification is based on the article title and description (lead).
### Usage:
```python
import torch
import torch.nn.functional as F
from transformers import BertForSequenceClassification, BertTokenizer
from datasets import load_dataset
model = BertForSequenceClassification.from_pretrained('boapps/kmdb_classification_model')
tokenizer = BertTokenizer.from_pretrained('SZTAKI-HLT/hubert-base-cc')
article = {'title': '400 milliós luxusvillába vette be magát Matolcsy és családja', 'description': 'Matolcsy György fiának cége megvette, Matolcsy György unokatestvérének bankja meghitelezte, Matolcsy György pedig használja a 430 millióért hirdetett II. kerületi luxusrezidenciát.'}
tokenized_article = tokenizer(article['title']+'\n'+article['description'], return_tensors="pt")
logits = model(**tokenized_article).logits
probabilities = F.softmax(logits[0], dim=-1)
print(probabilities)
```
### Results
- precision: 0.739
- recall: 0.950
- accuracy: 0.963
|
MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T21:49:20Z | 25 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Norquinal/Mistral-7B-claude-instruct",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T21:44:18Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Norquinal/Mistral-7B-claude-instruct
---
# Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp
Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Norquinal/Mistral-7B-claude-instruct](https://huggingface.co/Norquinal/Mistral-7B-claude-instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: Norquinal/Mistral-7B-claude-instruct
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mistral-7B-claude-instruct-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
nick911/Tanjiro-LoRA
|
nick911
| 2024-01-11T21:49:07Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"base_model:stabilityai/sdxl-turbo",
"base_model:finetune:stabilityai/sdxl-turbo",
"region:us"
] |
text-to-image
| 2024-01-05T14:41:49Z |
---
base_model: stabilityai/sdxl-turbo
instance_prompt: mdvl
tags:
- text-to-image
- diffusers
inference: true
---
|
ndacelo/fine
|
ndacelo
| 2024-01-11T21:45:14Z | 118 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/mbart-large-50-many-to-many-mmt",
"base_model:finetune:facebook/mbart-large-50-many-to-many-mmt",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-11T21:24:43Z |
---
base_model: facebook/mbart-large-50-many-to-many-mmt
tags:
- generated_from_trainer
model-index:
- name: fine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MaziyarPanahi/Mini_DPO_test02-Mistral-7B-Instruct-v0.2-slerp
|
MaziyarPanahi
| 2024-01-11T21:38:08Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Minirecord/Mini_DPO_test02",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T21:33:06Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistral
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Minirecord/Mini_DPO_test02
---
# Mini_DPO_test02-Mistral-7B-Instruct-v0.2-slerp
Mini_DPO_test02-Mistral-7B-Instruct-v0.2-slerp is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Minirecord/Mini_DPO_test02](https://huggingface.co/Minirecord/Mini_DPO_test02)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-Instruct-v0.2
layer_range: [0, 32]
- model: Minirecord/Mini_DPO_test02
layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/Mini_DPO_test02-Mistral-7B-Instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
querri/zephyr-haiku
|
querri
| 2024-01-11T21:38:05Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"region:us"
] | null | 2024-01-10T02:51:31Z |
---
library_name: peft
base_model: HuggingFaceH4/zephyr-7b-beta
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
hxxris/haaris-audio-classification-modified
|
hxxris
| 2024-01-11T21:34:49Z | 147 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-11T20:45:00Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: haaris-audio-classification-modified
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# haaris-audio-classification-modified
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.83 | 3 | nan | 0.0354 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Kamyar-zeinalipour/mistral-sft-lora-ChemInfo
|
Kamyar-zeinalipour
| 2024-01-11T21:34:37Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T19:48:12Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-sft-lora-ChemInfo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-sft-lora-ChemInfo
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 100
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6074 | 0.99 | 41 | 0.6070 |
| 0.4858 | 2.0 | 83 | 0.4963 |
| 0.4609 | 2.96 | 123 | 0.4781 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo
|
davanstrien
| 2024-01-11T21:32:28Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dpo",
"conversational",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T20:48:00Z |
---
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
license: apache-2.0
language:
- en
tags:
- dpo
---
# Model Card for Model ID
This model is a DPO fine-tune of `TinyLlama/TinyLlama-1.1B-Chat-v1.0` on the `argilla/distilabel-intel-orca-dpo-pairs` dataset.
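A minimal chat inference sketch (assumes `transformers` is installed and that the tokenizer inherits the base model's chat template):
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo")
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
output = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```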
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ntc-ai/SDXL-LoRA-slider.warrior
|
ntc-ai
| 2024-01-11T21:19:19Z | 78 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-11T21:19:16Z |
---
language:
- en
thumbnail: "images/evaluate/warrior...hair down/warrior_17_3.0.png"
widget:
- text: warrior
output:
url: images/warrior_17_3.0.png
- text: warrior
output:
url: images/warrior_19_3.0.png
- text: warrior
output:
url: images/warrior_20_3.0.png
- text: warrior
output:
url: images/warrior_21_3.0.png
- text: warrior
output:
url: images/warrior_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "warrior"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - warrior (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/warrior_17_-3.0.png" width=256 height=256 /> | <img src="images/warrior_17_0.0.png" width=256 height=256 /> | <img src="images/warrior_17_3.0.png" width=256 height=256 /> |
| <img src="images/warrior_19_-3.0.png" width=256 height=256 /> | <img src="images/warrior_19_0.0.png" width=256 height=256 /> | <img src="images/warrior_19_3.0.png" width=256 height=256 /> |
| <img src="images/warrior_20_-3.0.png" width=256 height=256 /> | <img src="images/warrior_20_0.0.png" width=256 height=256 /> | <img src="images/warrior_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
warrior
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.warrior', weight_name='warrior.safetensors', adapter_name="warrior")
# Activate the LoRA
pipe.set_adapters(["warrior"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, warrior"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1,030 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
LC008/ppo2-LunarLander-v2
|
LC008
| 2024-01-11T21:16:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T21:16:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.29 +/- 18.28
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
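While the snippet above is a placeholder, a minimal loading sketch looks like this (the checkpoint filename is an assumption — check the repository's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename below is assumed; verify it in the repo's "Files" tab.
checkpoint = load_from_hub(repo_id="LC008/ppo2-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
print(evaluate_policy(model, env, n_eval_episodes=10, deterministic=True))
```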
|
jysssacc/627_roberta-base_lora_lr0.005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T21:11:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-11T21:06:27Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: 627_roberta-base_lora_lr0.005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 627_roberta-base_lora_lr0.005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.4666 | 1.0 | 157 | 0.7542 |
| 0.9259 | 2.0 | 314 | 0.2338 |
| 1.0113 | 3.0 | 471 | 0.4868 |
| 1.6291 | 4.0 | 628 | 0.1931 |
| 1.037 | 5.0 | 785 | 0.0703 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jysssacc/mt0-base_IA3_lr0.05_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T21:10:31Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/mt0-base",
"base_model:adapter:bigscience/mt0-base",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T21:08:41Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/mt0-base
model-index:
- name: mt0-base_IA3_lr0.05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt0-base_IA3_lr0.05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/mt0-base](https://huggingface.co/bigscience/mt0-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0030
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1772 | 1.0 | 157 | 0.0040 |
| 0.0136 | 2.0 | 314 | 0.0005 |
| 0.041 | 3.0 | 471 | 0.0072 |
| 0.1131 | 4.0 | 628 | 0.1128 |
| 0.0622 | 5.0 | 785 | 0.0030 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ain3007-project/monai-unet-512-no-augmentation
|
ain3007-project
| 2024-01-11T21:07:25Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-01-11T21:04:31Z |
---
license: mit
---
```python
from monai.networks.nets import UNet  # UNet model from MONAI (there are other models you can use with a single line)
model = UNet(
spatial_dims=2,
in_channels=3,
out_channels=1,
channels=[16, 32, 64, 128, 256, 512], # Number of channels at each layer during contraction
strides=(2, 2, 2, 2, 2), # Strides for the convolutional layers
num_res_units=4, # Number of residual units
dropout=0.15, # Dropout rate to prevent overfitting
)
```
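A quick shape check for the configuration above (a sketch; with five stride-2 downsamplings, the input height/width should be divisible by 32):
```python
import torch

x = torch.randn(1, 3, 512, 512)  # (batch, in_channels, H, W)
with torch.no_grad():
    y = model(x)
print(y.shape)  # expected: torch.Size([1, 1, 512, 512])
```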
|
mejdik/swin-tiny-patch4-window7-224-finetuned-eurosat
|
mejdik
| 2024-01-11T21:01:09Z | 197 | 0 |
transformers
|
[
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-11T20:22:50Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9748148148148148
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0860
- Accuracy: 0.9748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2841 | 1.0 | 190 | 0.1861 | 0.9515 |
| 0.1951 | 2.0 | 380 | 0.1127 | 0.9652 |
| 0.1413 | 3.0 | 570 | 0.0860 | 0.9748 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
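The card does not include an inference example; a minimal sketch using the `transformers` image-classification pipeline is below (the image path is a placeholder, and the label set comes from the fine-tuning dataset).

```python
# Minimal inference sketch for the fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="mejdik/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
predictions = classifier("path/to/satellite_tile.jpg")  # placeholder image path
print(predictions[:3])  # top predicted classes with scores
```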
|
cnatale/Mistral-7B-Instruct-v0.1-Txt-2-Presto-SQL
|
cnatale
| 2024-01-11T21:00:26Z | 12 | 1 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-01T18:46:05Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: Mistral-7B-Instruct-v0.1-Txt-2-Presto-SQL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.1-Txt-2-Presto-SQL
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3518 | 0.71 | 10 | 1.0787 |
| 1.0171 | 1.43 | 20 | 0.8732 |
| 0.8466 | 2.14 | 30 | 0.7727 |
| 0.7681 | 2.86 | 40 | 0.7219 |
| 0.7008 | 3.57 | 50 | 0.6813 |
| 0.6467 | 4.29 | 60 | 0.6574 |
| 0.6205 | 5.0 | 70 | 0.6487 |
| 0.5791 | 5.71 | 80 | 0.6481 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
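As a rough guide (not from the original card), the adapter can be loaded on top of the base instruct model with PEFT and prompted for Presto SQL. The Mistral `[INST]` prompt format below is an assumption, since the fine-tuning template is not documented.

```python
# Minimal sketch: base Mistral-7B-Instruct-v0.1 + this adapter, prompted for Presto SQL.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
model = PeftModel.from_pretrained(base, "cnatale/Mistral-7B-Instruct-v0.1-Txt-2-Presto-SQL")

# Assumed prompt template; the actual SFT template is not documented in the card.
prompt = "[INST] Write a Presto SQL query that counts orders per day from the orders table. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```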
|
Kousha/corgy_dog_LoRA
|
Kousha
| 2024-01-11T20:59:55Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-11T20:59:54Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
license: openrail++
---
# SDXL LoRA DreamBooth - Kousha/corgy_dog_LoRA
<Gallery />
## Model description
These are Kousha/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Kousha/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
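A minimal generation sketch (not part of the original card) is shown below: it loads the SDXL base pipeline, attaches these LoRA weights with `diffusers`, and prompts with the trigger phrase. The sampling settings are placeholders.

```python
# Minimal sketch: SDXL base + this LoRA, prompted with the trigger phrase.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Kousha/corgy_dog_LoRA")

image = pipe("a photo of TOK dog sitting on a beach", num_inference_steps=30).images[0]
image.save("corgi.png")
```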
|
gustavokpc/IC_segundo
|
gustavokpc
| 2024-01-11T20:54:58Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-10-21T02:22:56Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: gustavokpc/IC_segundo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gustavokpc/IC_segundo
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0559
- Train Accuracy: 0.9805
- Train F1 M: 0.5583
- Train Precision M: 0.4028
- Train Recall M: 0.9686
- Validation Loss: 0.2533
- Validation Accuracy: 0.9327
- Validation F1 M: 0.5605
- Validation Precision M: 0.4028
- Validation Recall M: 0.9674
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 3790, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch |
|:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:|
| 0.3576 | 0.8399 | 0.4604 | 0.3607 | 0.7042 | 0.2825 | 0.8997 | 0.5635 | 0.4127 | 0.9300 | 0 |
| 0.2012 | 0.9274 | 0.5204 | 0.3849 | 0.8616 | 0.2103 | 0.9175 | 0.5451 | 0.3970 | 0.9095 | 1 |
| 0.1312 | 0.9511 | 0.5451 | 0.3969 | 0.9273 | 0.2125 | 0.9307 | 0.5571 | 0.4017 | 0.9523 | 2 |
| 0.0871 | 0.9690 | 0.5547 | 0.4007 | 0.9557 | 0.2417 | 0.9301 | 0.5565 | 0.4013 | 0.9547 | 3 |
| 0.0559 | 0.9805 | 0.5583 | 0.4028 | 0.9686 | 0.2533 | 0.9327 | 0.5605 | 0.4028 | 0.9674 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.5
- Tokenizers 0.14.1
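No usage snippet is given in the card; the sketch below is an assumption-laden starting point. It loads the checkpoint with the TF sequence-classification auto class and falls back to the `bert-base-uncased` tokenizer (the base model), since the repository's own tokenizer files and label names are not described here.

```python
# Inference sketch (assumptions: TF sequence-classification head, base-model tokenizer).
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed tokenizer
model = TFAutoModelForSequenceClassification.from_pretrained("gustavokpc/IC_segundo")

inputs = tokenizer("an example sentence to classify", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # predicted class index (label names undocumented)
```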
|
SimplCup/MKBHD
|
SimplCup
| 2024-01-11T20:50:04Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2024-01-11T20:49:47Z |
---
license: cc-by-nc-nd-4.0
---
|
omarelsayeed/e5_tsdae_contrastive
|
omarelsayeed
| 2024-01-11T20:47:23Z | 51 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-11T20:46:44Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# omarelsayeed/e5_tsdae_contrastive
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('omarelsayeed/e5_tsdae_contrastive')
embeddings = model.encode(sentences)
print(embeddings)
```
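A short follow-up sketch for semantic similarity (illustrative only; the sentences are placeholders):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('omarelsayeed/e5_tsdae_contrastive')
embeddings = model.encode(["find cheap flights", "book an inexpensive plane ticket"])
print(float(util.cos_sim(embeddings[0], embeddings[1])))  # higher score = more similar
```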
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=omarelsayeed/e5_tsdae_contrastive)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2212 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`__main__.LoggingCosineLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 80, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Makucas/Mistral-7B-Instruct-v0.2_02
|
Makucas
| 2024-01-11T20:46:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-11T19:06:22Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: Mistral-7B-Instruct-v0.2_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2_02
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
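For reference, the hyperparameters listed above roughly correspond to the `transformers.TrainingArguments` sketched below. This is a reconstruction for readability, not the original training script; the dataset and SFT setup remain undocumented.

```python
# Rough reconstruction of the listed hyperparameters (sketch only).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.2_02",
    learning_rate=3e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 24
    lr_scheduler_type="cosine",
    warmup_ratio=0.3,
    num_train_epochs=2,
    seed=42,
)
```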
|
jysssacc/627_roberta-base_IA3_lr0.005_bs4_epoch5_wd0.01
|
jysssacc
| 2024-01-11T20:42:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:adapter:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-01-11T20:36:44Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: roberta-base
model-index:
- name: 627_roberta-base_IA3_lr0.005_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 627_roberta-base_IA3_lr0.005_bs4_epoch5_wd0.01
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 14.7128 | 1.0 | 157 | 4.3267 |
| 2.9233 | 2.0 | 314 | 2.0311 |
| 1.9583 | 3.0 | 471 | 0.9513 |
| 0.9076 | 4.0 | 628 | 0.4387 |
| 0.6892 | 5.0 | 785 | 0.3581 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
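Because the card does not state the downstream task, a safe first step is to inspect the stored adapter configuration; the sketch below only reads metadata and makes no assumptions about the model head.

```python
# Inspect the saved IA3 adapter configuration (metadata only).
from peft import PeftConfig

config = PeftConfig.from_pretrained("jysssacc/627_roberta-base_IA3_lr0.005_bs4_epoch5_wd0.01")
print(config.base_model_name_or_path)  # expected: roberta-base
print(config.peft_type)                # expected: IA3
print(config.target_modules)           # modules carrying the learned scaling vectors
```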
|
Jiong233/PPO-LunarLander-v2
|
Jiong233
| 2024-01-11T20:41:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-11T20:27:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 277.06 +/- 24.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
import gym

from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor
from huggingface_sb3 import load_from_hub

repo_id = "Jiong233/PPO-LunarLander-v2"
filename = "ppo-LunarLander-v2.zip"

# When the model was trained on Python 3.8 the pickle protocol is 5,
# but Python 3.6 and 3.7 use protocol 4.
# To keep compatibility we need to:
# 1. Install pickle5 (done at the beginning of the colab)
# 2. Pass custom objects to PPO.load() so the serialized schedules are replaced
custom_objects = {
    "learning_rate": 0.0,
    "lr_schedule": lambda _: 0.0,
    "clip_range": lambda _: 0.0,
}

checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)

eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
```
|