| Column | Type | Range |
|---|---|---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-12 00:41:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 555 classes |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-12 00:40:24 |
| card | string | length 11 to 1.01M |
KingKazma/xsum_gpt2_p_tuning_500_10_3000_5_e0_s6789_v3_l6_v100
|
KingKazma
| 2023-08-12T11:00:16Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-12T11:00:15Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
josephamess/llama-2-7b-MultiChoiceFineTuned-rag
|
josephamess
| 2023-08-12T10:58:05Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-12T10:57:59Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
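For reference, a config like the one listed above would typically be expressed with `transformers.BitsAndBytesConfig` roughly as follows. This is a sketch only, not the exact code used for this adapter; it assumes a transformers release that ships `BitsAndBytesConfig`:
```python
import torch
from transformers import BitsAndBytesConfig

# approximate reconstruction of the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```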
### Framework versions
- PEFT 0.5.0.dev0
|
foduucom/plant-leaf-detection-and-classification
|
foduucom
| 2023-08-12T10:55:32Z | 1,987 | 24 |
ultralytics
|
[
"ultralytics",
"tensorboard",
"v8",
"ultralyticsplus",
"yolov8",
"yolo",
"vision",
"object-detection",
"pytorch",
"awesome-yolov8-models",
"plant classification",
"plant detection",
"leaf classification",
"leaf detection",
"crop detection",
"disease detection",
"en",
"model-index",
"region:us"
] |
object-detection
| 2023-08-12T10:45:05Z |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- awesome-yolov8-models
- plant classification
- plant detection
- leaf classification
- leaf detection
- crop detection
- disease detection
library_name: ultralytics
library_version: 8.0.43
inference: False
model-index:
- name: foduucom/plant-leaf-detection-and-classification
results:
- task:
type: object-detection
metrics:
- type: precision
value: 0.946
name: mAP@0.5(box)
language:
- en
metrics:
- accuracy
---
Below is the Model Card for the YOLOv8s Leaf Detection and Classification model:
---
<p align="center">
<!-- Smaller size image -->
<img src="https://huggingface.co/foduucom/plant-leaf-detection-and-classification/resolve/main/thumbnail.jpg" alt="Image" style="width:500px; height:300px;">
</p>
# Model Card for YOLOv8s Leaf Detection and Classification
## Model Summary
The YOLOv8s Leaf Detection and Classification model is an object detection model based on the YOLO (You Only Look Once) framework. It is designed to detect and classify various types of leaves in images. The model achieves an mAP@0.5 (box) of 0.946 on the object detection task.
## Model Details
### Model Description
The YOLOv8s Leaf Detection and Classification model is built on the YOLOv8 architecture, which is known for its real-time object detection capabilities. This specific model has been trained to recognize and classify different types of leaves from various plant species. It can detect multiple leaf instances in an image and assign them to their respective classes.
```
['ginger', 'banana', 'tobacco', 'ornamaental', 'rose', 'soyabean', 'papaya', 'garlic', 'raspberry', 'mango', 'cotton', 'corn', 'pomgernate', 'strawberry', 'Blueberry', 'brinjal', 'potato', 'wheat', 'olive', 'rice', 'lemon', 'cabbage', 'gauava', 'chilli', 'capcicum', 'sunflower', 'cherry', 'cassava', 'apple', 'tea', 'sugarcane', 'groundnut', 'weed', 'peach', 'coffee', 'cauliflower', 'tomato', 'onion', 'gram', 'chiku', 'jamun', 'castor', 'pea', 'cucumber', 'grape', 'cardamom']
```
- **Developed by:** FODUU AI
- **Model type:** Object Detection
- **Language(s) (NLP):** English
Furthermore, the YOLOv8s Leaf Detection and Classification model encourages collaboration by allowing users to contribute their own plant leaf data. Users can submit images of new plant species and suggest plant names for classification. Our team will diligently work to incorporate these new plant classes into the model, enhancing its ability to identify and classify an even wider variety of plant leaves.
Users are invited to actively participate in expanding the model's capabilities by sharing plant names and corresponding dataset links through our community platform or by emailing the information to info@foduu.com. Your contributions will play a crucial role in enriching the model's knowledge and recognition of diverse plant species.
## Uses
### Direct Use
The YOLOv8s Leaf Detection and Classification model can be used directly for object detection tasks related to leaf detection and classification. It does not require fine-tuning or integration into a larger ecosystem or application.
### Downstream Use
The model can also be fine-tuned for specific leaf detection and classification tasks or integrated into a larger application for plant-related research, agriculture, or environmental monitoring.
### Out-of-Scope Use
The model is not designed for unrelated tasks or object detection scenarios outside the scope of leaf detection and classification.
## Bias, Risks, and Limitations
The YOLOv8s Leaf Detection and Classification model may have some limitations and biases:
- The model's performance may vary depending on the quality and diversity of the training data.
- It may struggle with detecting leaves that are heavily occluded or overlapping with other objects.
- The model's accuracy may be affected by variations in lighting conditions, image quality, and resolution.
- It may not accurately detect very small or distant leaves in images.
- The model's classification accuracy may be lower for leaf species that resemble each other closely.
- The model's biases may stem from the biases present in the training data.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Further research and experimentation are recommended to assess its performance in specific use cases and domains.
## How to Get Started with the Model
To get started with the YOLOv8s Leaf Detection and Classification model, follow these steps:
1. Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus) and [ultralytics](https://github.com/ultralytics/ultralytics) libraries using pip:
```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```
2. Load the model and perform prediction using the provided code snippet.
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('foduucom/plant-leaf-detection-and-classification')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'path/to/your/image'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
## Training Details
### Training Data
The model is trained on hundreds of images of 46 different plants, including both disease-infected and healthy leaves.
### Training Procedure
Training uses high GPU capacity and runs for up to 50 epochs. Each epoch is a complete pass through the entire training dataset, during which the model weights are adjusted to minimize the classification loss and optimize performance.
#### Metrics
- mAP@0.5 (box):
0.946
#### Summary
YOLOv8s is a powerful convolutional neural network tailored for leaf detection and classification across 46 plant species. It leverages a modified CSPDarknet53 backbone, a self-attention mechanism, and a feature pyramid network for accurate multi-scale object detection, providing precise identification and classification of plant leaves.
### Model Architecture and Objective
The YOLOv8 architecture utilizes a modified CSPDarknet53 as its backbone, with 53 convolutional layers and cross-stage partial connections for improved information flow. The head consists of convolutional and fully connected layers for predicting bounding boxes, objectness scores, and class probabilities. It incorporates a self-attention mechanism and a feature pyramid network for multi-scale object detection, enabling focus on relevant image features and detection of objects of different sizes.
### Compute Infrastructure
#### Hardware
NVIDIA A100 40GB GPU card
#### Software
Jupyter Notebook environment for model training.
## Model Card Contact
info@foduu.com
```bibtex
@ModelCard{
author = {Nehul Agrawal and
Pranjal singh Thakur},
title = { YOLOv8s Leaf Detection and Classification},
year = {2023}
}
```
---
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_5_e-1_s6789_v3_l6_v100
|
KingKazma
| 2023-08-12T10:51:26Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-12T10:51:25Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
godlikeheheda/my_awesome_qa_model
|
godlikeheheda
| 2023-08-12T10:18:03Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-08-12T10:02:28Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6889
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
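As a rough sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (the output directory name is a placeholder, the per-epoch evaluation strategy is an assumption, and the Adam betas/epsilon listed above are the Trainer defaults, so they need no explicit arguments):
```python
from transformers import TrainingArguments

# a minimal sketch; "my_awesome_qa_model" as output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="my_awesome_qa_model",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",   # linear decay, as listed above
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: evaluate once per epoch
)
```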
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.4330 |
| 2.7864 | 2.0 | 500 | 1.7490 |
| 2.7864 | 3.0 | 750 | 1.6889 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Fantast/yolos-small-finetuned-for-seal
|
Fantast
| 2023-08-12T09:49:55Z | 118 | 1 |
transformers
|
[
"transformers",
"pytorch",
"yolos",
"object-detection",
"arxiv:2106.00666",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-08-12T09:34:46Z |
### YOLOS (small-sized) model fine-tuned for the seal detection task
#### YOLOS model based on `hustvl/yolos-small`, fine-tuned on our seal image dataset.
#### Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss.
#### How to use
Here is how to use this model:
```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
image = Image.open("xxxxxxxxxxxxx")
feature_extractor = YolosFeatureExtractor.from_pretrained('fantast/yolos-small-finetuned-for-seal')
model = YolosForObjectDetection.from_pretrained('fantast/yolos-small-finetuned-for-seal')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
Currently, both the feature extractor and model support PyTorch.
#### Training data
The YOLOS model is based on `hustvl/yolos-small` and fine-tuned on our own seal image dataset, which consists of 118k/5k annotated images for training/validation respectively.
#### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
author = {Yuxin Fang and
Bencheng Liao and
Xinggang Wang and
Jiemin Fang and
Jiyang Qi and
Rui Wu and
Jianwei Niu and
Wenyu Liu},
title = {You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection},
journal = {CoRR},
volume = {abs/2106.00666},
year = {2021},
url = {https://arxiv.org/abs/2106.00666},
eprinttype = {arXiv},
eprint = {2106.00666},
timestamp = {Fri, 29 Apr 2022 19:49:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
---
license: mit
---
|
BADeid/Joy_Training_Part_5
|
BADeid
| 2023-08-12T09:30:55Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-12T09:22:57Z |
---
license: creativeml-openrail-m
---
|
Sneka/sneka_G
|
Sneka
| 2023-08-12T09:26:55Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"gpt2",
"region:us"
] | null | 2023-08-12T07:09:53Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
golaxy/gogpt2-13b
|
golaxy
| 2023-08-12T09:19:39Z | 1,433 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"llama2",
"chinese-llama2",
"gogpt2-7b",
"zh",
"dataset:c-s-ale/alpaca-gpt4-data-zh",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-31T13:32:31Z |
---
license: apache-2.0
datasets:
- c-s-ale/alpaca-gpt4-data-zh
language:
- zh
tags:
- llama2
- chinese-llama2
- gogpt2-7b
---
# GoGPT2-13B: A Chinese-English enhanced large model trained on Llama2-13b

<p align="center">
<img alt="GitHub" src="https://img.shields.io/github/license/ymcui/Chinese-LLaMA-Alpaca.svg?color=blue&style=flat-square">
<img alt="GitHub top language" src="https://img.shields.io/github/languages/top/ymcui/Chinese-LLaMA-Alpaca">
</p>
> ICT Chinese-English foundation-enhanced large models: 7 billion and 13 billion parameters
🤗 GoGPT weights are provided on Huggingface; the gogpt-7b and gogpt2-7b weights are currently released
| Model name | Base model | Model size | Download |
|-------------------------------------------------------------|-----------|------|-------------------------------------------------|
| [golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) | Llama-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt-7b) |
| [golaxy/gogpt2-7b](https://huggingface.co/golaxy/gogpt2-7b) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b) |
| [golaxy/gogpt2-7b-pretrain](https://huggingface.co/golaxy/gogpt2-7b-pretrain) | Llama2-7b | 7B | [Download](https://huggingface.co/golaxy/gogpt2-7b-pretrain) |
| [golaxy/gogpt2-13b-pretrain](https://huggingface.co/golaxy/gogpt2-13b-pretrain) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b-pretrain) |
| [golaxy/gogpt2-13b](https://huggingface.co/golaxy/gogpt2-13b) | Llama2-13b | 13B | [Download](https://huggingface.co/golaxy/gogpt2-13b) |
[GoGPT-Github](https://github.com/yanqiangmiffy/GoGPT)
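The card itself does not include inference code; a minimal sketch using the standard transformers text-generation API might look like this (the prompt format is an assumption, since the exact instruction template is not documented here):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("golaxy/gogpt2-13b")
model = AutoModelForCausalLM.from_pretrained(
    "golaxy/gogpt2-13b", torch_dtype=torch.float16, device_map="auto"
)

# the plain prompt below is a placeholder, not a documented template
prompt = "How should I learn machine learning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```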
## Test results
- How should I learn machine learning?
```
Sure, here are some ways to learn machine learning:
1. Hands-on practice: learn through practical projects, for example by using open-source machine learning frameworks (such as TensorFlow, Keras, Scikit-Learn) to implement various machine learning tasks.
2. Course study: take some online or offline machine learning courses, for example on Coursera, Udemy or edX; these platforms all offer a wide range of machine learning courses.
3. Reading books: pick some classic machine learning books, such as Andrew Ng's "Deep Learning" and "Machine Learning"; these books can help you learn the fundamentals of machine learning in depth.
4. Joining communities: join machine learning communities (such as Kaggle, Github or Reddit) to exchange and share experience with other machine learning enthusiasts and keep learning and improving.
5. Entering competitions: take part in machine learning competitions (such as Kaggle), where you can learn and sharpen your skills by solving real problems.
In short, learning machine learning takes continuous practice and exploration to find the learning method that suits you.
```
- Write a review of a symphony concert, discussing the orchestra's performance and the overall audience experience.
```
A symphony concert is a dream for music lovers and fans of classical music. Last weekend I attended a symphony concert, and the experience was truly unforgettable.
First, the orchestra's performance was outstanding. They are all professional musicians with highly polished playing technique. As they performed, they revealed the essence of the music and let the audience feel its power and charm. They also did an excellent job rendering the different elements of the symphony, such as the highs, the lows, rhythm, tonality and variation. Their playing was very natural, neither overly tense nor too relaxed, giving a calm and harmonious feeling.
Second, the overall audience experience was also very good. The concert venue was spacious and bright, and the seats were generously spaced. The audience could freely appreciate every detail of the music while enjoying its charm. The atmosphere was also very friendly, with audience members chatting with one another and sharing the music together. In addition, a variety of food and drinks were available at the venue, adding to the audience's comfort.
Overall, this symphony concert was a very enjoyable musical experience. The orchestra performed superbly and the overall audience experience was excellent. I am really looking forward to the next concert and to experiencing the charm of the music again.
```
- Write an engaging review of a recently released science fiction film, discussing its plot, characters and special effects.
```
"Star Wars: A New Era" is an exciting science fiction film that continues the story and characters of the earlier Star Wars series while bringing new challenges and surprises.
The plot is very engaging: it tells of the struggle between a group of rebels and the Empire, with the rebels trying to overthrow the Empire's rule while the Empire does everything it can to hold on to power. The complexity and tension of the story are astonishing, and every character has their own distinct motives and history, which makes the whole film more vivid.
The main characters include Rey, a young warrior, and her mentor Kylo Ren, an evil warrior. The interaction and conflict between these two characters is a major focus of the film; their storyline is full of mystery and suspense, keeping the audience wondering what will happen next. The film also introduces some new characters, such as Maisy Endor, the leader of the rebels on a new planet, along with classic droid characters like C-3PO and R2-D2, whose return and performances are also exciting.
The visual effects are stunning as well, using the latest technology and effects to create realistic space scenes and battle sequences. The planets, buildings, machinery and weapons all look very lifelike, and the action scenes are thrilling. The visuals amaze the audience and immerse them even more deeply in the film's world.
Overall, "Star Wars: A New Era" is a science fiction film well worth watching. It tells an engaging story with wonderful characters and special effects, making the whole film exciting and impressive. Its success also points to the future of the Star Wars series.
```
- On average, how many times does a human blink in a lifetime? Try to explain your answer. Your explanation should walk the reader step by step through your reasoning.
```
On average, a human blinks about 5,600,000 times in a lifetime. This figure comes from many different research results and estimates.
First, blink frequency changes with age. Infants blink more often than adults, and the frequency gradually decreases as people grow older. Based on this pattern, we can estimate that adults blink about 10 times per minute.
Second, blink frequency is affected by many factors. For example, when people feel nervous or tired, they blink more often; likewise, when people feel pleased or relaxed, they blink less. In addition, blink frequency differs across parts of the eye; for example, the inner side of the eye blinks more often than the inner side.
Finally, people's eye health also affects blink frequency, for example retinal disease or vision problems. All of these factors can change how often a person blinks.
In summary, a human blinks roughly 5,600,000 times in a lifetime on average. This figure is the result of many influencing factors, including age, physical condition and emotional state.
```
- Please implement the quicksort algorithm in the Python programming language
Here is sample code for the quicksort algorithm in Python:
```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[0]
        left = []
        right = []
        for i in range(1, len(arr)):
            if arr[i] < pivot:
                left.append(arr[i])
            else:
                right.append(arr[i])
        return quick_sort(left) + [pivot] + quick_sort(right)
```
This function takes an array of integers as input; its worst-case time complexity is O(n^2) and its best-case time complexity is O(n log n).
- How can I improve my time management skills?
```
Here are some suggestions for improving your time management skills:
1. Make a plan: every morning, or at the start of each day, draw up a detailed plan that includes the tasks and goals you need to complete that day.
2. Prioritize: sort tasks by priority and finish the important ones first.
3. Focus: concentrate your attention on the task at hand and avoid distractions.
4. Avoid procrastination: avoid putting things off as much as possible, finish tasks promptly, and don't let them become overdue.
5. Avoid multitasking: multitasking makes tasks take longer, so it is best to focus on a single task until it is finished.
6. Rest: rest is key to recovering and improving productivity, so take regular breaks and relax body and mind.
7. Adjust your schedule: if you find problems with your time management, adjust your schedule, for example by lengthening breaks or shortening task times.
8. Learn new skills: learning new skills can help improve efficiency, for example using tools or software to manage tasks.
9. Seek help: if you find it hard to manage your time, ask family, friends or colleagues for help.
I hope these suggestions help you improve your time management skills.
```
- What are the main differences between the Python and JavaScript programming languages?
```
Python and JavaScript are two different programming languages, and they have the following main differences:
1. Data types: Python is a dynamically typed language; you do not declare data types, and the type is determined automatically from the value a variable holds. JavaScript is also a dynamically typed language; it uses literals to indicate a variable's data type.
2. Syntax: Python and JavaScript have some syntactic similarities but also some differences. For example, Python uses indentation to define code blocks, while JavaScript uses curly braces.
3. Execution environment: Python usually runs in a Python interpreter on the local machine, while JavaScript usually runs in the browser. This means Python is commonly used for desktop applications and server-side web applications, while JavaScript is mainly used for web applications.
4. Language characteristics: Python offers higher readability and maintainability, while JavaScript offers better extensibility and maintainability.
In short, Python and JavaScript are two different programming languages with different characteristics and use cases. Which one to choose depends on the specific needs of the project and the developer's skill level.
```
- What are the main differences between plant protein and animal protein?
```
The main differences between plant protein and animal protein lie in their sources and amino acid composition.
Sources:
Plant protein comes mainly from plant tissues such as legumes, seeds, fruits and leaves, while animal protein comes mainly from animal body parts such as muscle, organs, fish and eggs.
Amino acid composition:
The main amino acids in plant protein are plant-based amino acids, such as plant essential amino acids (for example thiamine and phytic acid), while the main amino acids in animal protein are animal-based amino acids, such as muscle protein and lactose protein.
Therefore, plant protein and animal protein differ in nutritional value. Plant protein is rich in plant essential amino acids such as thiamine and phytic acid, while animal protein contains more animal essential amino acids such as muscle protein and lactose protein. Plant protein is usually lower in calories than animal protein, but some plant proteins have lower protein quality. Animal protein usually has higher protein quality than plant protein, but it also contains more harmful substances such as saturated fat and cholesterol.
```
- What are the main factors that influence consumer behavior?
```
Many factors influence consumer behavior; here are a few of the main ones:
1. Consumer demand: consumers' needs are one of the main factors influencing their behavior. These needs arise from personal interests and values. For example, if a consumer likes fashionable clothing, the choice of malls and brands selling such clothing may be more attractive to them.
2. Purchasing power: limits on purchasing power are another factor. Consumers may choose to buy certain goods because of budget constraints.
3. Price: price is also one of the factors influencing consumer behavior. Consumers usually compare the prices of different goods and decide what to buy based on price.
4. Brand awareness: brand awareness is another influencing factor. Consumers usually choose reputable brands because they believe those brands can provide high-quality products and services.
5. Social media: social media plays an increasingly important role in influencing consumer behavior. Many consumers learn about goods and brands on social media and then choose to buy them.
6. Social media: social media plays an increasingly important role in influencing consumer behavior. Many consumers learn about goods and brands on social media and then choose to buy them.
In short, there are many main factors influencing consumer behavior, and they are often interrelated. By understanding these factors, businesses can develop effective marketing strategies to attract and retain consumers.
```
|
DuyTa/vi-whisper-medium-Lora
|
DuyTa
| 2023-08-12T09:13:22Z | 75 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"generated_from_trainer , Lora adapter implement",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2023-08-02T16:01:21Z |
---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer , Lora adapter implement
metrics:
- wer
model-index:
- name: vi_whisper-small
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: vin100h
type: vin
config: Cleaned
split: Train 0.8 , Test 0.2
metrics:
- name: Wer
type: wer
value: 21.68
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi_whisper-medium
This model is a one-shot fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Vin100h dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2894
- WER: 21.68 with the Whisper model quantized by CTranslate2
## Model description
To use the quantized model, first read the documentation on how to use the CTranslate2 converter and the Faster Whisper repo here:
- [CTranslate2](https://github.com/OpenNMT/CTranslate2.git)
- [Faster-Whisper](https://github.com/guillaumekln/faster-whisper)
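As a rough illustration of how the converted model might then be loaded with Faster-Whisper (a sketch only; the local path to the CTranslate2-converted weights, the audio file name, and the compute type are assumptions, not taken from this repo):
```python
from faster_whisper import WhisperModel

# hypothetical path to the CTranslate2-converted model directory
model = WhisperModel("path/to/ct2-vi-whisper-medium", device="cuda", compute_type="int8_float16")

# transcribe a local audio file; segments are yielded lazily
segments, info = model.transcribe("sample_vi.wav", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```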
## Intended uses & limitations
More information needed
## Training and evaluation data
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 8000
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu11.8
- Datasets 2.13.1
- Tokenizers 0.13.3
- PEFT 0.5.0.dev0
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
|
csukuangfj/sherpa-onnx-tdnn-yesno
|
csukuangfj
| 2023-08-12T09:02:11Z | 0 | 0 | null |
[
"onnx",
"license:apache-2.0",
"region:us"
] | null | 2023-08-12T07:20:49Z |
---
license: apache-2.0
---
# Introduction
This repo contains onnx models for the [tdnn][tdnn] recipe of the [yesno][yesno] dataset
from [icefall][icefall].
Please see [sherpa-onnx][sherpa-onnx] for usages.
[yesno]: https://www.openslr.org/1/
[icefall]: https://github.com/k2-fsa/icefall
[sherpa-onnx]: https://github.com/k2-fsa/sherpa-onnx
[tdnn]: https://github.com/k2-fsa/icefall/tree/master/egs/yesno/ASR/tdnn
|
Blaxzter/whisper-large-v2-inference-endpoint
|
Blaxzter
| 2023-08-12T09:01:25Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-11T20:32:40Z |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
Compared to the Whisper large model, the large-v2 model is trained for 2.5x more epochs with added regularization
for improved performance.
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
Which forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Large on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
3.0003583080317572
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-large-v2",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
Michelvh/flan-t5-mc-question-generation
|
Michelvh
| 2023-08-12T08:58:28Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-08-11T12:55:30Z |
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
model-index:
- name: flan-t5-mc-question-generation
results: []
inference:
parameters:
max_length: 512
num_beams: 4
length_penalty: 1.5
no_repeat_ngram_size: 3
early_stopping: True
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-mc-question-generation
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2509
## Model description
More information needed
## Intended uses & limitations
More information needed
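While the card leaves this section open, a minimal usage sketch with the transformers text2text-generation pipeline, applying the inference parameters declared in the metadata above, might look like this (the prompt wording and context are placeholders, not a documented input format):
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="Michelvh/flan-t5-mc-question-generation",
)

# prompt wording is an assumption; generation settings mirror the
# inference parameters from the model card metadata
context = "The Sun is the star at the center of the Solar System."
print(generator(
    f"Generate a multiple choice question from this context: {context}",
    max_length=512,
    num_beams=4,
    length_penalty=1.5,
    no_repeat_ngram_size=3,
    early_stopping=True,
))
```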
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9047 | 0.25 | 100 | 1.4246 |
| 1.5894 | 0.51 | 200 | 1.3711 |
| 1.5355 | 0.76 | 300 | 1.3450 |
| 1.5041 | 1.02 | 400 | 1.3255 |
| 1.4858 | 1.27 | 500 | 1.3134 |
| 1.4711 | 1.53 | 600 | 1.3038 |
| 1.4576 | 1.78 | 700 | 1.2951 |
| 1.4466 | 2.04 | 800 | 1.2888 |
| 1.4405 | 2.29 | 900 | 1.2836 |
| 1.4284 | 2.55 | 1000 | 1.2794 |
| 1.4228 | 2.8 | 1100 | 1.2758 |
| 1.4234 | 3.06 | 1200 | 1.2719 |
| 1.4104 | 3.31 | 1300 | 1.2690 |
| 1.4147 | 3.56 | 1400 | 1.2666 |
| 1.41 | 3.82 | 1500 | 1.2637 |
| 1.3996 | 4.07 | 1600 | 1.2622 |
| 1.4015 | 4.33 | 1700 | 1.2600 |
| 1.3958 | 4.58 | 1800 | 1.2583 |
| 1.395 | 4.84 | 1900 | 1.2566 |
| 1.3899 | 5.09 | 2000 | 1.2553 |
| 1.3929 | 5.35 | 2100 | 1.2542 |
| 1.3884 | 5.6 | 2200 | 1.2529 |
| 1.3884 | 5.86 | 2300 | 1.2523 |
| 1.3821 | 6.11 | 2400 | 1.2520 |
| 1.3886 | 6.37 | 2500 | 1.2513 |
| 1.3865 | 6.62 | 2600 | 1.2510 |
| 1.3841 | 6.87 | 2700 | 1.2509 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
hdduytran/autotrain-cat-dog-testing-81779141856
|
hdduytran
| 2023-08-12T08:56:50Z | 185 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"swin",
"image-classification",
"autotrain",
"vision",
"dataset:hdduytran/autotrain-data-cat-dog-testing",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-12T08:56:02Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- hdduytran/autotrain-data-cat-dog-testing
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.2783185695727747
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 81779141856
- CO2 Emissions (in grams): 0.2783
## Validation Metrics
- Loss: 0.030
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
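No usage snippet is included in the card; a minimal sketch of running inference with the standard transformers image-classification pipeline might look like this (the local image path is a placeholder):
```python
from transformers import pipeline

# a sketch only; "example_pet.jpg" is a placeholder path
classifier = pipeline(
    "image-classification",
    model="hdduytran/autotrain-cat-dog-testing-81779141856",
)
print(classifier("example_pet.jpg"))
```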
|
Long1405/whisper-small-youtube-podcast-data
|
Long1405
| 2023-08-12T08:48:16Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-12T05:15:30Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-youtube-podcast-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-youtube-podcast-data
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6011
- Wer: 60.8819
- Cer: 48.7902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.0118 | 8.7 | 1000 | 0.5323 | 58.0111 | 45.8276 |
| 0.0014 | 17.39 | 2000 | 0.6011 | 60.8819 | 48.7902 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
TinToTin/ppo-SnowballTarget
|
TinToTin
| 2023-08-12T08:39:53Z | 8 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-12T08:39:40Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Thineshan/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lvkaokao/llama2-7b-hf-chat-lora
|
lvkaokao
| 2023-08-12T08:29:18Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2023-08-12T07:49:14Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0
|
saad10007/donut-base-sroie
|
saad10007
| 2023-08-12T08:24:03Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-10T16:36:09Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Ayeshaayman/donut-base-sroie
|
Ayeshaayman
| 2023-08-12T08:07:01Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-08-11T12:31:53Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Mitsua/vroid-diffusion-test
|
Mitsua
| 2023-08-12T07:27:02Z | 52 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"dataset:Mitsua/vroid-image-dataset-lite",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-09T07:05:50Z |
---
license: other
datasets:
- Mitsua/vroid-image-dataset-lite
pipeline_tag: text-to-image
---
# Model Card for VRoid Diffusion
<!-- Provide a quick summary of what the model is/does. -->
This is a latent text-to-image diffusion model to demonstrate how U-Net training affects the generated images.
- Text Encoder is from [OpenCLIP ViT-H/14](https://github.com/mlfoundations/open_clip), MIT License, Training Data : LAION-2B
- VAE is from [Mitsua Diffusion One](https://huggingface.co/Mitsua/mitsua-diffusion-one), Mitsua Open RAIL-M License, Training Data: Public Domain/CC0 + Licensed
- U-Net is trained from scratch using full version of [VRoid Image Dataset Lite](https://huggingface.co/datasets/Mitsua/vroid-image-dataset-lite) with some modifications.
- VRoid is a trademark or registered trademark of Pixiv inc. in Japan and other regions.
## Model Details
- `vroid_diffusion_test.safetensors`
- base variant.
- `vroid_diffusion_test_invert_red_blue.safetensors`
- `red` and `blue` in the caption are swapped.
- `pink` and `skyblue` in the caption are swapped.
- `vroid_diffusion_test_monochrome.safetensors`
- all training images are converted to grayscale.
## Model Variant
- [VRoid Diffusion Unconditional](https://huggingface.co/Mitsua/vroid-diffusion-test-unconditional)
- This is an unconditional image generator without CLIP.
### Model Description
- **Developed by:** Abstract Engine.
- **License:** Mitsua Open RAIL-M License.
## Uses
### Direct Use
Text-to-Image generation for research and educational purposes.
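The card does not ship a usage snippet; as a rough sketch with the diffusers `StableDiffusionPipeline` (the prompt is a placeholder, and generating at the trained 256x256 resolution is an assumption based on the training details below):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Mitsua/vroid-diffusion-test", torch_dtype=torch.float16
).to("cuda")

# prompt is a placeholder; 256x256 matches the trained resolution noted below
image = pipe("a 3d anime-style character portrait", height=256, width=256).images[0]
image.save("vroid_sample.png")
```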
### Out-of-Scope Use
Any deployed use case of the model.
## Training Details
- Trained resolution : 256x256
- Batch Size : 48
- Steps : 45k
- LR : 1e-5 with warmup 1000 steps
### Training Data
We use full version of [VRoid Image Dataset Lite](https://huggingface.co/datasets/Mitsua/vroid-image-dataset-lite) with some modifications.
|
chriskim2273/IOTNation_Classification_Model_0.4_SMALLER_CLEANED_SET_4k
|
chriskim2273
| 2023-08-12T07:05:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-12T06:54:33Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: IOTNation_Classification_Model_0.4_SMALLER_CLEANED_SET_4k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IOTNation_Classification_Model_0.4_SMALLER_CLEANED_SET_4k
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1392
- Accuracy: 0.9658
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
weiren119/Taiwan-LLaMa-v1.0-4bits-GPTQ
|
weiren119
| 2023-08-12T07:05:13Z | 7 | 9 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"zh",
"dataset:yentinglin/zh_TW_c4",
"dataset:yentinglin/traditional_chinese_instructions",
"arxiv:2305.13711",
"arxiv:2104.09864",
"license:llama2",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-08-12T02:29:51Z |
---
datasets:
- yentinglin/zh_TW_c4
- yentinglin/traditional_chinese_instructions
inference: false
license: llama2
language:
- zh
model_creator: Yen-Ting Lin
model_link: https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0
model_name: Language Models for Taiwanese Culture 1.0
model_type: llama
quantized_by: weiren119
---
<!-- header start -->
<!-- header end -->
# Taiwan-LLaMa-v1.0-GPTQ
- Model creator: [Yen-Ting Lin](https://huggingface.co/yentinglin)
- Original model: [Language Models for Taiwanese Culture v1.0](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0)
## Description
This repo contains GPTQ format model files for [Yen-Ting Lin's Language Models for Taiwanese Culture v1.0](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0).
## Intro
- The 4-bit GPTQ model was converted from [Taiwan-LLaMa-v1.0 13b](https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0) using the [auto-gptq](https://github.com/PanQiWei/AutoGPTQ) package
## How to use the GPTQ model in Python
- Install the GPTQ package: `pip install auto-gptq`
- Here is example code:
```python
from transformers import AutoTokenizer,TextStreamer,TextIteratorStreamer
from auto_gptq import AutoGPTQForCausalLM
class TaiwanLLaMaGPTQ:
def __init__(self, model_dir):
self.tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
self.model = AutoGPTQForCausalLM.from_quantized(model_dir,
trust_remote_code=True,
use_safetensors=True,
device_map="auto",
use_triton=False,
strict=False)
self.chat_history = []
self.system_prompt = """You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."""
self.streamer = TextStreamer(self.tokenizer, skip_prompt=True, skip_special_tokens=True)
self.thread_streamer = TextIteratorStreamer(self.tokenizer, skip_special_tokens=True)
def get_prompt(self, message: str, chat_history: list[tuple[str, str]]) -> str:
texts = [f'[INST] <<SYS>>\n{self.system_prompt}\n<</SYS>>\n\n']
for user_input, response in chat_history:
texts.append(f'{user_input.strip()} [/INST] {response.strip()} </s><s> [INST] ')
texts.append(f'{message.strip()} [/INST]')
return ''.join(texts)
def generate(self, message: str):
prompt = self.get_prompt(message, self.chat_history)
tokens = self.tokenizer(prompt, return_tensors='pt').input_ids
generate_ids = self.model.generate(input_ids=tokens.cuda(), max_new_tokens=4096, streamer=self.streamer)
output = self.tokenizer.decode(generate_ids[0, len(tokens[0]):-1]).strip()
self.chat_history.append([message, output])
return output
def thread_generate(self, message:str):
from threading import Thread
prompt = self.get_prompt(message, self.chat_history)
inputs = self.tokenizer(prompt, return_tensors="pt")
generation_kwargs = dict(
inputs=inputs.input_ids.cuda(),
attention_mask=inputs.attention_mask,
temperature=0.1,
max_new_tokens=1024,
streamer=self.thread_streamer,
)
# Run generation on separate thread to enable response streaming.
thread = Thread(target=self.model.generate, kwargs=generation_kwargs)
thread.start()
for new_text in self.thread_streamer:
yield new_text
thread.join()
inferencer = TaiwanLLaMaGPTQ("weiren119/Taiwan-LLaMa-v1.0-4bits-GPTQ")
s = ''
while True:
s = input("User: ")
if s != '':
print ('Answer:')
print (inferencer.generate(s))
print ('-'*80)
```
<!-- footer start -->
<!-- footer end -->
# Original model card: Yen-Ting Lin's Language Models for Taiwanese Culture v1.0
# Language Models for Taiwanese Culture
<p align="center">
✍️ <a href="https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2" target="_blank">Online Demo</a>
•
🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a>
• 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a>
<br/><br/>
<img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE">
<img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a>
<br/>
</p>
## Overview
Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Chinese applications.
**Taiwan-LLaMa v1.0** is pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Chinese.
## Demo
A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2).
## Key Features
1. **Traditional Chinese Support**: The model is fine-tuned to understand and generate text in Traditional Chinese, making it suitable for Taiwanese culture and related applications.
2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses.
3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on Vicuna Benchmark is measured against models like GPT-4 and ChatGPT. It's particularly optimized for Taiwanese culture.
4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo.
## Work in progress
- [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, training strategies) is under development, aiming to enhance model performance and better capture Taiwanese culture.
- [ ] **Extend max length**: Utilizing the RoPE mechanism as described in [the paper](https://arxiv.org/abs/2104.09864), the model's context length will be extended from 4k to 8k.
## Taiwanese Culture Examples
| Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat |
|:-|:-|:-|:-|:-|
| 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 |
| 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 |
| NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。|
| 甚麼是22K? | "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。|
| 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 |
## Model
We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2:
| **Model** | **13B** |
|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> |
| Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> |
| Taiwan-LLaMa v0.0 (no Traditional Chinese pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> |
## Data
Here are some quick links to the datasets that we used to train the models:
| **Dataset** | **Link** |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_chinese_instructions" target="_blank">yentinglin/traditional_chinese_instructions</a> |
| Traditional Chinese Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> |
## Architecture
Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16.
It includes:
* Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens, extracted from Common Crawl in Traditional Chinese.
* Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversational data to enable more instruction-following and context-aware responses.
## Generic Capabilities on Vicuna Benchmark
The data is translated into traditional Chinese for evaluating the general capability.
<img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700">
The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT.
| Language Model | Relative Score (%) |
|-------------------------------------|--------------------|
| GPT-4 | 102.59% |
| ChatGPT | 100.00% |
| **Taiwan-LLaMa v1.0** | 76.76% |
| Claude-Instant-1.2 | 74.04% |
| Llama2_Traditional_Chinese_13b_Chat | 56.21% |
## How to deploy the model on my own machine?
We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations.
```bash
bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN
```
Prompt format follows vicuna-v1.1 template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT:
```
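Once the TGI server is up, a minimal client sketch might look like the following. The endpoint URL/port and the `text-generation` client package are assumptions; the prompt follows the template above:
```python
# pip install text-generation
from text_generation import Client

client = Client("http://127.0.0.1:8080")  # assumed local TGI endpoint; adjust as needed

system = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)
user = "台灣最高的山是哪一座?"  # "Which is the tallest mountain in Taiwan?"
prompt = f"{system} USER: {user} ASSISTANT:"

response = client.generate(prompt, max_new_tokens=256, temperature=0.7)
print(response.generated_text)
```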
## Setup development environment
```bash
conda create -n taiwan-llama python=3.10 -y
conda activate taiwan-llama
pip install -r requirements.txt
```
## Citations
If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry:
```bibtex
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
```
## Collaborate With Us
If you are interested in contributing to the development of Traditional Chinese language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors.
## License
The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details.
## OpenAI Data Acknowledgment
The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations.
## Acknowledgements
We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
|
raannakasturi/VILID-assets
|
raannakasturi
| 2023-08-12T06:43:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-22T12:40:25Z |
---
license: creativeml-openrail-m
---
|
KeWE90210/distilbert-base-uncased-finetuned-emotion
|
KeWE90210
| 2023-08-12T06:31:46Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-12T06:11:04Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9225222595524254
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2245
- Accuracy: 0.9225
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3260 | 0.9015 | 0.9004 |
| No log | 2.0 | 500 | 0.2245 | 0.9225 | 0.9225 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
JuiThe/mt5large_lora_Wreview
|
JuiThe
| 2023-08-12T06:16:24Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-12T06:16:22Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
davzoku/cria-llama2-7b-v1.0
|
davzoku
| 2023-08-12T05:45:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-12T05:45:22Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
Shobhank-iiitdwd/t5_qg
|
Shobhank-iiitdwd
| 2023-08-12T05:42:06Z | 170 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-08T06:26:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_modified_for_t5_qg
model-index:
- name: t5-end2end-questions-answers-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-answers-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3810
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5388 | 0.34 | 100 | 1.7772 |
| 1.8647 | 0.68 | 200 | 1.6304 |
| 1.7367 | 1.02 | 300 | 1.5443 |
| 1.6048 | 1.36 | 400 | 1.4884 |
| 1.5559 | 1.69 | 500 | 1.4590 |
| 1.5309 | 2.03 | 600 | 1.4440 |
| 1.465 | 2.37 | 700 | 1.4215 |
| 1.4601 | 2.71 | 800 | 1.4078 |
| 1.4439 | 3.05 | 900 | 1.4123 |
| 1.3988 | 3.39 | 1000 | 1.4108 |
| 1.3896 | 3.73 | 1100 | 1.3915 |
| 1.3781 | 4.07 | 1200 | 1.3927 |
| 1.3557 | 4.41 | 1300 | 1.3849 |
| 1.3476 | 4.75 | 1400 | 1.3877 |
| 1.3419 | 5.08 | 1500 | 1.3836 |
| 1.3203 | 5.42 | 1600 | 1.3765 |
| 1.3135 | 5.76 | 1700 | 1.3754 |
| 1.3251 | 6.1 | 1800 | 1.3794 |
| 1.3004 | 6.44 | 1900 | 1.3786 |
| 1.299 | 6.78 | 2000 | 1.3810 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Hemanth-thunder/lora-trained-xl-hmth
|
Hemanth-thunder
| 2023-08-12T05:30:30Z | 26 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-11T14:06:02Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of hmth
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Hemanth-thunder/lora-trained-xl-colab
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of hmth using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
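A minimal loading sketch with 🤗 Diffusers follows. The generation settings are illustrative; the LoRA repository id is taken from this repo's name and the fp16-fix VAE matches the one noted above:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL

# Same fp16-fix VAE that was used for training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("Hemanth-thunder/lora-trained-xl-hmth")

image = pipe("a photo of hmth", num_inference_steps=30).images[0]
image.save("hmth.png")
```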
|
zetazlife/fisura-hormigon
|
zetazlife
| 2023-08-12T05:25:14Z | 168 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-11T18:56:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: fisura-hormigon
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.99775
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fisura-hormigon
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0123
- Accuracy: 0.9978
## Model description
More information needed
## Intended uses & limitations
More information needed
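A minimal inference sketch is shown below, assuming the repository ships the usual image processor config saved by the Trainer; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="zetazlife/fisura-hormigon")

# Placeholder path to a concrete-surface photo to check for cracks.
print(classifier("concrete_surface.jpg"))
```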
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.0989 | 0.17 | 500 | 0.0477 | 0.9871 |
| 0.0531 | 0.33 | 1000 | 0.0493 | 0.9865 |
| 0.046 | 0.5 | 1500 | 0.0381 | 0.9894 |
| 0.0392 | 0.67 | 2000 | 0.1129 | 0.9734 |
| 0.0459 | 0.83 | 2500 | 0.0364 | 0.9904 |
| 0.0255 | 1.0 | 3000 | 0.0305 | 0.9934 |
| 0.0188 | 1.17 | 3500 | 0.0247 | 0.9949 |
| 0.0222 | 1.33 | 4000 | 0.0206 | 0.9921 |
| 0.02 | 1.5 | 4500 | 0.0154 | 0.9952 |
| 0.0191 | 1.67 | 5000 | 0.0132 | 0.9952 |
| 0.0141 | 1.83 | 5500 | 0.0294 | 0.9905 |
| 0.0201 | 2.0 | 6000 | 0.0155 | 0.9968 |
| 0.0114 | 2.17 | 6500 | 0.0161 | 0.9965 |
| 0.0071 | 2.33 | 7000 | 0.0124 | 0.9975 |
| 0.0083 | 2.5 | 7500 | 0.0141 | 0.9969 |
| 0.0143 | 2.67 | 8000 | 0.0242 | 0.9932 |
| 0.0088 | 2.83 | 8500 | 0.0123 | 0.9972 |
| 0.0034 | 3.0 | 9000 | 0.0120 | 0.9972 |
| 0.0064 | 3.17 | 9500 | 0.0100 | 0.9978 |
| 0.0012 | 3.33 | 10000 | 0.0166 | 0.996 |
| 0.006 | 3.5 | 10500 | 0.0110 | 0.998 |
| 0.0007 | 3.67 | 11000 | 0.0126 | 0.9972 |
| 0.0034 | 3.83 | 11500 | 0.0122 | 0.9979 |
| 0.0057 | 4.0 | 12000 | 0.0123 | 0.9978 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
zuvi/videomae-base-finetuned-kinetics-finetuned-k400-subset
|
zuvi
| 2023-08-12T05:13:40Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2023-08-07T18:29:20Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-kinetics-finetuned-k400-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-kinetics-finetuned-k400-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4419
- eval_accuracy: 0.7382
- eval_runtime: 28603.1881
- eval_samples_per_second: 2.662
- eval_steps_per_second: 0.042
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 19340
### Framework versions
- Transformers 4.30.2
- Pytorch 2.1.0.dev20230627
- Datasets 2.13.1
- Tokenizers 0.13.3
|
mutisya/wav2vec2-1b-adapters-mer-drL-v1.2
|
mutisya
| 2023-08-12T05:05:22Z | 33 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-11T13:34:06Z |
---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-1b-adapters-mer-drL-v1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-1b-adapters-mer-drL-v1.2
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.2542 | 0.02 | 100 | inf | 0.6865 |
| 1.0706 | 0.05 | 200 | inf | 0.6425 |
| 1.054 | 0.07 | 300 | inf | 0.6273 |
| 1.131 | 0.1 | 400 | inf | 0.6269 |
| 1.091 | 0.12 | 500 | inf | 0.6120 |
| 1.209 | 0.14 | 600 | inf | 0.6234 |
| 1.032 | 0.17 | 700 | inf | 0.6036 |
| 1.0873 | 0.19 | 800 | inf | 0.5984 |
| 1.1786 | 0.21 | 900 | inf | 0.6164 |
| 1.0065 | 0.24 | 1000 | inf | 0.5984 |
| 1.1682 | 0.26 | 1100 | inf | 0.9431 |
| 2.716 | 0.29 | 1200 | inf | 1.0 |
| 4.6494 | 0.31 | 1300 | inf | 1.0 |
| 6.5668 | 0.33 | 1400 | inf | 1.0 |
| 8.2033 | 0.36 | 1500 | inf | 0.9997 |
| 8.4349 | 0.38 | 1600 | inf | 1.0 |
| 8.8253 | 0.41 | 1700 | inf | 1.0000 |
| 10.0956 | 0.43 | 1800 | inf | 1.0000 |
| 10.1367 | 0.45 | 1900 | inf | 1.0000 |
| 10.2369 | 0.48 | 2000 | inf | 1.0000 |
| 10.185 | 0.5 | 2100 | inf | 1.0000 |
| 10.1101 | 0.53 | 2200 | inf | 1.0000 |
| 10.1882 | 0.55 | 2300 | inf | 1.0000 |
| 10.1791 | 0.57 | 2400 | inf | 1.0000 |
| 10.095 | 0.6 | 2500 | inf | 1.0000 |
| 10.0778 | 0.62 | 2600 | inf | 1.0000 |
| 10.1407 | 0.64 | 2700 | inf | 1.0000 |
| 10.2779 | 0.67 | 2800 | inf | 1.0000 |
| 10.0577 | 0.69 | 2900 | inf | 1.0000 |
| 10.1342 | 0.72 | 3000 | inf | 1.0000 |
| 10.0638 | 0.74 | 3100 | inf | 1.0000 |
| 10.19 | 0.76 | 3200 | inf | 1.0000 |
| 10.2655 | 0.79 | 3300 | inf | 1.0000 |
| 10.1485 | 0.81 | 3400 | inf | 1.0000 |
| 10.2903 | 0.84 | 3500 | inf | 1.0000 |
| 10.1934 | 0.86 | 3600 | inf | 1.0000 |
| 10.038 | 0.88 | 3700 | inf | 1.0000 |
| 10.1181 | 0.91 | 3800 | inf | 1.0000 |
| 10.1547 | 0.93 | 3900 | inf | 1.0000 |
| 10.2849 | 0.95 | 4000 | inf | 1.0000 |
| 10.1119 | 0.98 | 4100 | inf | 1.0000 |
| 10.2269 | 1.0 | 4200 | inf | 1.0000 |
| 10.1069 | 1.03 | 4300 | inf | 1.0000 |
| 10.2036 | 1.05 | 4400 | inf | 1.0000 |
| 10.1252 | 1.07 | 4500 | inf | 1.0000 |
| 10.0869 | 1.1 | 4600 | inf | 1.0000 |
| 9.9904 | 1.12 | 4700 | inf | 1.0000 |
| 10.1395 | 1.15 | 4800 | inf | 1.0000 |
| 10.0352 | 1.17 | 4900 | inf | 1.0000 |
| 10.3128 | 1.19 | 5000 | inf | 1.0000 |
| 10.1161 | 1.22 | 5100 | inf | 1.0000 |
| 10.1318 | 1.24 | 5200 | inf | 1.0000 |
| 10.1863 | 1.27 | 5300 | inf | 1.0000 |
| 10.1645 | 1.29 | 5400 | inf | 1.0000 |
| 10.3267 | 1.31 | 5500 | inf | 1.0000 |
| 9.9707 | 1.34 | 5600 | inf | 1.0000 |
| 10.2071 | 1.36 | 5700 | inf | 1.0000 |
| 10.0865 | 1.38 | 5800 | inf | 1.0000 |
| 10.3051 | 1.41 | 5900 | inf | 1.0000 |
| 10.203 | 1.43 | 6000 | inf | 1.0000 |
| 10.0152 | 1.46 | 6100 | inf | 1.0000 |
| 10.1636 | 1.48 | 6200 | inf | 1.0000 |
| 10.1885 | 1.5 | 6300 | inf | 1.0000 |
| 10.1876 | 1.53 | 6400 | inf | 1.0000 |
| 10.1075 | 1.55 | 6500 | inf | 1.0000 |
| 10.1307 | 1.58 | 6600 | inf | 1.0000 |
| 10.3877 | 1.6 | 6700 | inf | 1.0000 |
| 10.1684 | 1.62 | 6800 | inf | 1.0000 |
| 10.0601 | 1.65 | 6900 | inf | 1.0000 |
| 10.3244 | 1.67 | 7000 | inf | 1.0000 |
| 10.2978 | 1.69 | 7100 | inf | 1.0000 |
| 10.2394 | 1.72 | 7200 | inf | 1.0000 |
| 10.0721 | 1.74 | 7300 | inf | 1.0000 |
| 10.1697 | 1.77 | 7400 | inf | 1.0000 |
| 10.3378 | 1.79 | 7500 | inf | 1.0000 |
| 10.1207 | 1.81 | 7600 | inf | 1.0000 |
| 10.1188 | 1.84 | 7700 | inf | 1.0000 |
| 10.0966 | 1.86 | 7800 | inf | 1.0000 |
| 10.2581 | 1.89 | 7900 | inf | 1.0000 |
| 10.219 | 1.91 | 8000 | inf | 1.0000 |
| 10.272 | 1.93 | 8100 | inf | 1.0000 |
| 10.1932 | 1.96 | 8200 | inf | 1.0000 |
| 10.0127 | 1.98 | 8300 | nan | 1.0 |
| 0.0 | 2.01 | 8400 | nan | 1.0 |
| 0.0 | 2.03 | 8500 | nan | 1.0 |
| 0.0 | 2.05 | 8600 | nan | 1.0 |
| 0.0 | 2.08 | 8700 | nan | 1.0 |
| 0.0 | 2.1 | 8800 | nan | 1.0 |
| 0.0 | 2.12 | 8900 | nan | 1.0 |
| 0.0 | 2.15 | 9000 | nan | 1.0 |
| 0.0 | 2.17 | 9100 | nan | 1.0 |
| 0.0 | 2.2 | 9200 | nan | 1.0 |
| 0.0 | 2.22 | 9300 | nan | 1.0 |
| 0.0 | 2.24 | 9400 | nan | 1.0 |
| 0.0 | 2.27 | 9500 | nan | 1.0 |
| 0.0 | 2.29 | 9600 | nan | 1.0 |
| 0.0 | 2.32 | 9700 | nan | 1.0 |
| 0.0 | 2.34 | 9800 | nan | 1.0 |
| 0.0 | 2.36 | 9900 | nan | 1.0 |
| 0.0 | 2.39 | 10000 | nan | 1.0 |
| 0.0 | 2.41 | 10100 | nan | 1.0 |
| 0.0 | 2.43 | 10200 | nan | 1.0 |
| 0.0 | 2.46 | 10300 | nan | 1.0 |
| 0.0 | 2.48 | 10400 | nan | 1.0 |
| 0.0 | 2.51 | 10500 | nan | 1.0 |
| 0.0 | 2.53 | 10600 | nan | 1.0 |
| 0.0 | 2.55 | 10700 | nan | 1.0 |
| 0.0 | 2.58 | 10800 | nan | 1.0 |
| 0.0 | 2.6 | 10900 | nan | 1.0 |
| 0.0 | 2.63 | 11000 | nan | 1.0 |
| 0.0 | 2.65 | 11100 | nan | 1.0 |
| 0.0 | 2.67 | 11200 | nan | 1.0 |
| 0.0 | 2.7 | 11300 | nan | 1.0 |
| 0.0 | 2.72 | 11400 | nan | 1.0 |
| 0.0 | 2.75 | 11500 | nan | 1.0 |
| 0.0 | 2.77 | 11600 | nan | 1.0 |
| 0.0 | 2.79 | 11700 | nan | 1.0 |
| 0.0 | 2.82 | 11800 | nan | 1.0 |
| 0.0 | 2.84 | 11900 | nan | 1.0 |
| 0.0 | 2.86 | 12000 | nan | 1.0 |
| 0.0 | 2.89 | 12100 | nan | 1.0 |
| 0.0 | 2.91 | 12200 | nan | 1.0 |
| 0.0 | 2.94 | 12300 | nan | 1.0 |
| 0.0 | 2.96 | 12400 | nan | 1.0 |
| 0.0 | 2.98 | 12500 | nan | 1.0 |
| 0.0 | 3.01 | 12600 | nan | 1.0 |
| 0.0 | 3.03 | 12700 | nan | 1.0 |
| 0.0 | 3.06 | 12800 | nan | 1.0 |
| 0.0 | 3.08 | 12900 | nan | 1.0 |
| 0.0 | 3.1 | 13000 | nan | 1.0 |
| 0.0 | 3.13 | 13100 | nan | 1.0 |
| 0.0 | 3.15 | 13200 | nan | 1.0 |
| 0.0 | 3.17 | 13300 | nan | 1.0 |
| 0.0 | 3.2 | 13400 | nan | 1.0 |
| 0.0 | 3.22 | 13500 | nan | 1.0 |
| 0.0 | 3.25 | 13600 | nan | 1.0 |
| 0.0 | 3.27 | 13700 | nan | 1.0 |
| 0.0 | 3.29 | 13800 | nan | 1.0 |
| 0.0 | 3.32 | 13900 | nan | 1.0 |
| 0.0 | 3.34 | 14000 | nan | 1.0 |
| 0.0 | 3.37 | 14100 | nan | 1.0 |
| 0.0 | 3.39 | 14200 | nan | 1.0 |
| 0.0 | 3.41 | 14300 | nan | 1.0 |
| 0.0 | 3.44 | 14400 | nan | 1.0 |
| 0.0 | 3.46 | 14500 | nan | 1.0 |
| 0.0 | 3.49 | 14600 | nan | 1.0 |
| 0.0 | 3.51 | 14700 | nan | 1.0 |
| 0.0 | 3.53 | 14800 | nan | 1.0 |
| 0.0 | 3.56 | 14900 | nan | 1.0 |
| 0.0 | 3.58 | 15000 | nan | 1.0 |
| 0.0 | 3.6 | 15100 | nan | 1.0 |
| 0.0 | 3.63 | 15200 | nan | 1.0 |
| 0.0 | 3.65 | 15300 | nan | 1.0 |
| 0.0 | 3.68 | 15400 | nan | 1.0 |
| 0.0 | 3.7 | 15500 | nan | 1.0 |
| 0.0 | 3.72 | 15600 | nan | 1.0 |
| 0.0 | 3.75 | 15700 | nan | 1.0 |
| 0.0 | 3.77 | 15800 | nan | 1.0 |
| 0.0 | 3.8 | 15900 | nan | 1.0 |
| 0.0 | 3.82 | 16000 | nan | 1.0 |
| 0.0 | 3.84 | 16100 | nan | 1.0 |
| 0.0 | 3.87 | 16200 | nan | 1.0 |
| 0.0 | 3.89 | 16300 | nan | 1.0 |
| 0.0 | 3.92 | 16400 | nan | 1.0 |
| 0.0 | 3.94 | 16500 | nan | 1.0 |
| 0.0 | 3.96 | 16600 | nan | 1.0 |
| 0.0 | 3.99 | 16700 | nan | 1.0 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
|
KarthiAru/peft-lora-starcoder-personal-copilot-A100-40GB-colab
|
KarthiAru
| 2023-08-12T04:22:06Z | 7 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:bigcode/starcoder",
"base_model:adapter:bigcode/starcoder",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-08-11T03:43:58Z |
---
license: bigcode-openrail-m
base_model: bigcode/starcoder
tags:
- generated_from_trainer
model-index:
- name: peft-lora-starcoder-personal-copilot-A100-40GB-colab
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-starcoder-personal-copilot-A100-40GB-colab
This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
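For inference, a hedged sketch of loading the gated `bigcode/starcoder` base in 4-bit with the same settings and attaching this adapter is given below (access to the base model, the prompt, and the generation settings are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the 4-bit settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder")
base = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "KarthiAru/peft-lora-starcoder-personal-copilot-A100-40GB-colab"
)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```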
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.66 | 0.05 | 100 | 0.5844 |
| 0.6223 | 0.1 | 200 | 0.5280 |
| 0.6601 | 0.15 | 300 | 0.4819 |
| 0.5526 | 0.2 | 400 | 0.4617 |
| 0.485 | 0.25 | 500 | 0.4593 |
| 0.5239 | 0.3 | 600 | 0.4492 |
| 0.489 | 0.35 | 700 | 0.4371 |
| 0.5582 | 0.4 | 800 | 0.4362 |
| 0.4688 | 0.45 | 900 | 0.4314 |
| 0.5415 | 0.5 | 1000 | 0.4227 |
| 0.5152 | 0.55 | 1100 | 0.4121 |
| 0.5243 | 0.6 | 1200 | 0.3967 |
| 0.414 | 0.65 | 1300 | 0.3954 |
| 0.557 | 0.7 | 1400 | 0.3926 |
| 0.4144 | 0.75 | 1500 | 0.3911 |
| 0.7935 | 0.8 | 1600 | 0.3896 |
| 0.4129 | 0.85 | 1700 | 0.3866 |
| 0.4549 | 0.9 | 1800 | 0.3877 |
| 0.3903 | 0.95 | 1900 | 0.3781 |
| 0.4945 | 1.0 | 2000 | 0.3627 |
### Framework versions
- PEFT 0.4.0
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
jlpan/starcoder-finetuned-cpp2py-2
|
jlpan
| 2023-08-12T04:20:00Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:bigcode/starcoder",
"base_model:finetune:bigcode/starcoder",
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-08-09T06:39:08Z |
---
license: bigcode-openrail-m
base_model: bigcode/starcoder
tags:
- generated_from_trainer
model-index:
- name: starcoder-finetuned-cpp2py-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# starcoder-finetuned-cpp2py-2
This model is a fine-tuned version of [bigcode/starcoder](https://huggingface.co/bigcode/starcoder) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3296 | 0.1 | 100 | 0.2904 |
| 0.312 | 0.2 | 200 | 0.2813 |
| 0.3061 | 0.3 | 300 | 0.2789 |
| 0.3012 | 0.4 | 400 | 0.2772 |
| 0.301 | 0.5 | 500 | 0.2762 |
| 0.3014 | 0.6 | 600 | 0.2756 |
| 0.2975 | 0.7 | 700 | 0.2751 |
| 0.2975 | 0.8 | 800 | 0.2748 |
| 0.2989 | 0.9 | 900 | 0.2748 |
| 0.2984 | 1.0 | 1000 | 0.2748 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Mustain/mod_jaqads_falcon_7B
|
Mustain
| 2023-08-12T04:18:42Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-12T04:18:15Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0.dev0
|
Yntec/WoopWoopRemix
|
Yntec
| 2023-08-12T04:14:05Z | 3,937 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"photorealistic",
"general",
"art",
"zoidbb",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-12T03:30:10Z |
---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- photorealistic
- general
- art
- zoidbb
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
# WoopWoopRemix
A mix of WoopWoopPhoto and WoopWoopGeneral to get the best of both worlds.
Sample and prompt:

Pretty Cute Girl, sitting, holding black bottle, beautiful detailed pajamas, gorgeous detailed hair, Magazine ad, iconic, 1943, from the movie, sharp focus, Detailed Chibi Eyes. visible brushstrokes by Kyoani and artgerm and Clay Mann and leyendecker and Dave Rapoza
Original page: https://prompthero.com/ai-models/woopwoop-photo-download (model can't be downloaded anymore)
|
edwardjjj/policy_gradient_cartpole
|
edwardjjj
| 2023-08-12T04:13:46Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-12T04:13:37Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: policy_gradient_cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
fvonhoven/ControlNet-endpoint
|
fvonhoven
| 2023-08-12T04:02:40Z | 0 | 1 | null |
[
"stable-diffusion",
"stable-diffusion-diffusers",
"controlnet",
"endpoints-template",
"arxiv:2302.05543",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2023-06-28T22:05:59Z |
---
license: openrail
tags:
- stable-diffusion
- stable-diffusion-diffusers
- controlnet
- endpoints-template
thumbnail: >-
https://huggingface.co/philschmid/ControlNet-endpoint/resolve/main/thumbnail.png
inference: true
duplicated_from: philschmid/ControlNet-endpoint
---
# Inference Endpoint for [ControlNet](https://huggingface.co/lllyasviel/ControlNet) using [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
> ControlNet is a neural network structure to control diffusion models by adding extra conditions.
> Official repository: https://github.com/lllyasviel/ControlNet
---
Blog post: [Controlled text to image generation with Inference Endpoints]()
This repository implements a custom `handler` task for `controlled text-to-image` generation on 🤗 Inference Endpoints. The code for the customized pipeline is in the [handler.py](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/ControlNet-endpoint/blob/main/create_handler.ipynb) included, on how to create the `handler.py`

### Expected request payload
```json
{
"inputs": "A prompt used for image generation",
"negative_prompt": "low res, bad anatomy, worst quality, low quality",
"controlnet_type": "depth",
"image" : "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC",
}
```
Supported `controlnet_type` values are: `canny_edge`, `pose`, `depth`, `scribble`, `segmentation`, `normal`, `hed`, `hough`
Below is an example of how to run a request using Python and `requests`.
## Use Python to send requests
1. Get image
```bash
wget https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_imgvar/input_image_vermeer.png
```
2. Use the following code to send a request to the endpoint
```python
import json
from typing import List
import requests as r
import base64
from PIL import Image
from io import BytesIO
ENDPOINT_URL = "" # your endpoint url
HF_TOKEN = "" # your huggingface token `hf_xxx`
# helper image utils
def encode_image(image_path):
with open(image_path, "rb") as i:
b64 = base64.b64encode(i.read())
return b64.decode("utf-8")
def predict(prompt, image, negative_prompt=None, controlnet_type = "normal"):
image = encode_image(image)
# prepare sample payload
request = {"inputs": prompt, "image": image, "negative_prompt": negative_prompt, "controlnet_type": controlnet_type}
# headers
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json",
"Accept": "image/png" # important to get an image back
}
response = r.post(ENDPOINT_URL, headers=headers, json=request)
if response.status_code != 200:
print(response.text)
raise Exception("Prediction failed")
img = Image.open(BytesIO(response.content))
return img
prediction = predict(
prompt = "cloudy sky background lush landscape house and green trees, RAW photo (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3",
negative_prompt ="lowres, bad anatomy, worst quality, low quality, city, traffic",
controlnet_type = "hed",
image = "huggingface.png"
)
prediction.save("result.png")
```
Expected output:

[Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala.
Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details.
The abstract of the paper is the following:
We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.
|
BADeid/Joy_Training_PT_4
|
BADeid
| 2023-08-12T03:58:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-08-12T03:51:42Z |
---
license: creativeml-openrail-m
---
|
nerdylive/my-awesome-model
|
nerdylive
| 2023-08-12T03:53:56Z | 48 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-12T03:53:39Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my-awesome-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5019
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.5019 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
BlueDancer/Artisanica_v2.1a1
|
BlueDancer
| 2023-08-12T03:39:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-29T02:27:45Z |
I carefully picked models that are realistic, beautiful, well trained, popular, and safe for commercial use. This is how Artisanica came about, now for the second time, and I am satisfied with the merge - hopefully you will be too!
Artisanica is focused on realism but is able to do basically anything, with a high level of detail. It should also create very beautiful environments and creative details in the final image.
Works really well with hires. fix, Ultimate SD Upscale and ADetailer!
I will be happy to see what you create with Artisanica - please share it here, and I hope you enjoy using it!
Artisanica is a child of these beautiful parents (alphabetically):
**Analog Madness - Realistic model** (analogMadness_v50)
**epiCRealism** (epicrealism_pureEvolutionV3)
**LazyMix+** (Real Amateur Nudes) (lazymixRealAmateur_v30a)
**Liberty** (liberty_main)
**Real Elden Apocalypse** (realeldenapocalypse_Analogsexknoll)
**Realistic Vision** (realisticVisionV30_v30VAE)
**rMadArt (SD1.5)** (rmadartSD15_v100Test)
**RPG** (rpg_V4)
**Thanks to all wonderful creators and mergers of previous models!!! <3**
---
license: openrail
---
|
Ashwan/Y56
|
Ashwan
| 2023-08-12T03:32:35Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"chemistry",
"zero-shot-classification",
"aa",
"dataset:fka/awesome-chatgpt-prompts",
"license:openrail",
"region:us"
] |
zero-shot-classification
| 2023-08-12T03:31:14Z |
---
license: openrail
datasets:
- fka/awesome-chatgpt-prompts
language:
- aa
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: zero-shot-classification
tags:
- chemistry
---
|
dicovan/univ_model
|
dicovan
| 2023-08-12T03:04:10Z | 0 | 0 | null |
[
"es",
"license:llama2",
"region:us"
] | null | 2023-08-12T03:03:07Z |
---
license: llama2
language:
- es
---
|
underdogliu1005/spkrec-resnet-voxceleb
|
underdogliu1005
| 2023-08-12T02:56:02Z | 7 | 0 |
speechbrain
|
[
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"ResNet",
"TDNN",
"en",
"dataset:voxceleb",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] | null | 2023-08-01T07:03:44Z |
---
language: "en"
thumbnail:
tags:
- speechbrain
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- ResNet
- TDNN
license: "apache-2.0"
datasets:
- voxceleb
metrics:
- EER
widget:
- example_title: VoxCeleb Speaker id10003
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- example_title: VoxCeleb Speaker id10004
src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with ResNet embeddings on Voxceleb
This repository provides all the necessary tools to perform speaker verification with a pretrained ResNet TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on Voxceleb 1 + Voxceleb2 training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on the Voxceleb1 test set (cleaned) is:
| Release | EER(%) | minDCF |
|:-------------:|:--------------:|:--------------:|
| 29-07-23 | 1.05 | 0.1082 |
## Pipeline description
This system is composed of a ResNet TDNN model. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Compute your speaker embeddings
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="speechbrain/spkrec-resnet-voxceleb")
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = classifier.encode_batch(signal)
```
### Perform Speaker Verification
```python
from speechbrain.pretrained import SpeakerRecognition
verification = SpeakerRecognition.from_hparams(source="speechbrain/spkrec-resnet-voxceleb", savedir="pretrained_models/spkrec-resnet-voxceleb")
score, prediction = verification.verify_files("speechbrain/spkrec-resnet-voxceleb/example1.wav", "speechbrain/spkrec-resnet-voxceleb/example2.flac")
```
The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/VoxCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_resnet.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ResNet TDNN
```
@article{VILLALBA2020101026,
title = {State-of-the-art speaker recognition with neural network embeddings in NIST SRE18 and Speakers in the Wild evaluations},
journal = {Computer Speech & Language},
volume = {60},
pages = {101026},
year = {2020},
doi = {https://doi.org/10.1016/j.csl.2019.101026},
author = {Jesús Villalba and Nanxin Chen and David Snyder and Daniel Garcia-Romero and Alan McCree and Gregory Sell and Jonas Borgstrom and Leibny Paola García-Perera and Fred Richardson and Réda Dehak and Pedro A. Torres-Carrasquillo and Najim Dehak},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
|
hoangdeeptry/whisper-small-collected-data
|
hoangdeeptry
| 2023-08-12T02:46:24Z | 69 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-08-11T21:59:37Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-collected-data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-collected-data
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6221
- Wer: 55.5283
- Cer: 44.5095
## Model description
More information needed
## Intended uses & limitations
More information needed
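A minimal transcription sketch with the 🤗 `pipeline` API is shown below; the audio file path is a placeholder and the language of the training data is not documented here:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="hoangdeeptry/whisper-small-collected-data",
)

# Placeholder audio file; a 16 kHz mono WAV works out of the box.
print(asr("sample.wav")["text"])
```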
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.0069 | 11.63 | 1000 | 0.5614 | 53.7696 | 42.3371 |
| 0.0011 | 23.26 | 2000 | 0.6221 | 55.5283 | 44.5095 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
coreml-community/coreml-realisticVision-v51VAE_cn
|
coreml-community
| 2023-08-12T02:27:45Z | 0 | 16 | null |
[
"coreml",
"stable-diffusion",
"text-to-image",
"not-for-all-eyes",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-27T04:21:14Z |
---
license: creativeml-openrail-m
tags:
- coreml
- stable-diffusion
- text-to-image
- not-for-all-eyes
---
# Core ML Converted Model:
- This model was converted to [Core ML for use on Apple Silicon devices](https://github.com/apple/ml-stable-diffusion). Conversion instructions can be found [here](https://github.com/godly-devotion/MochiDiffusion/wiki/How-to-convert-ckpt-or-safetensors-files-to-Core-ML).
- Provide the model to an app such as **Mochi Diffusion** [Github](https://github.com/godly-devotion/MochiDiffusion) / [Discord](https://discord.gg/x2kartzxGv) to generate images.
- `split_einsum` version is compatible with all compute unit options including Neural Engine.
- `original` version is only compatible with `CPU & GPU` option.
- Custom resolution versions are tagged accordingly.
- The `vae-ft-mse-840000-ema-pruned.ckpt` VAE is embedded into the model.
- This model was converted with a `vae-encoder` for use with `image2image`.
- This model is `fp16`.
- Descriptions are posted as-is from original model source.
- Not all features and/or results may be available in `CoreML` format.
- This model does not have the [unet split into chunks](https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml).
- This model does not include a `safety checker` (for NSFW content).
- This model can be used with ControlNet.
# realisticVision-v51VAE_cn:
Source(s): [CivitAI](https://civitai.com/models/4201)
## Realistic Vision V5.1
**This version is a fix for many issues of 5.0 (artifacts, ugly faces, masculine female faces, small eyes, poor compatibility with LoRA and TI).**
**In terms of realism this model may be worse than 5.0, but I tried to minimize losses.**
## Realistic Vision V5.0
**Finally the very final version of the model is here. I think this is the best update of the model. I was amazed at its quality when I generated the images, even though I am not a master at generating images.**
**Realistic Vision based on SD1.5 is finished. Thank you all for your great support, for your comments, feedback and reviews on the model.**
- This model is available on Mage.Space, Sinkin.ai and Smugo
- Please support my friend's model, he will be happy about it - "Life Like Diffusion"
- All my models on [Hugging Face](https://huggingface.co/SG161222)
### Please read this! How to remove strong contrast:
- To make the image less contrasty, you can apply the [Detail Tweaker LoRA](https://civitai.com/models/58390/detail-tweaker-lora-lora) with a negative weight.
## I use this template to get good generation results:
Prompt:
- RAW photo, subject, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
Negative Prompt:
- (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, UnrealisticDream
- (deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation, UnrealisticDream
Generation settings:
- Sampler: Euler A or DPM++ SDE Karras
- CFG Scale: 3.5-7
- Hires. fix with the 4x-UltraSharp upscaler
- Denoising strength: 0.25-0.45
- Upscale by: 1.1-2.0
- Clip Skip: 1-2
- ENSD: 31337
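For reference only, the settings above map roughly onto a `diffusers` call like the sketch below. It uses the author's upstream SD 1.5 checkpoint rather than this Core ML conversion, and the repo id (`SG161222/Realistic_Vision_V5.1_noVAE`) is an assumption; in Mochi Diffusion you simply enter the prompts and CFG value directly.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Repo id is assumed; any Realistic Vision 5.1 checkpoint in diffusers format works the same way
pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # "Euler A"

image = pipe(
    "RAW photo, portrait of a woman, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3",
    negative_prompt="(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime), text, cropped, out of frame, worst quality, low quality",
    guidance_scale=5.0,        # CFG Scale 3.5-7
    num_inference_steps=30,
).images[0]
image.save("realistic_vision.png")
```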
### Thanks to the creators of these models for their work. Without them it would not have been possible to create this model.
HassanBlend 1.5.1.2 by sdhassan
Uber Realistic Porn Merge (URPM) by saftle
Protogen x3.4 (Photorealism) + Protogen x5.3 (Photorealism) by darkstorm2150
Art & Eros (aEros) + RealEldenApocalypse by aine_captain
Dreamlike Photoreal 2.0 by sviasem
HASDX by bestjammer
Analog Diffusion by wavymulder
WoopWoop-Photo by zoidbb
Life Like Diffusion by lutherjonna409
AbsoluteReality by Lykon
CyberRealistic by Cyberdelia
Analog Madness by CornmeisterNL
A-Zovya Photoreal by Zovya
ICBINP - "I Can't Believe It's Not Photography" by residentchiefnz
epiCRealism by epinikion
Juggernaut by KandooAI<br><br>




|
Eggsbena/model_0011
|
Eggsbena
| 2023-08-12T02:22:56Z | 25 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-12T02:15:00Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
|
Evan-Lin/Bart-abs-yelp-entailment-1
|
Evan-Lin
| 2023-08-12T02:21:52Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"trl",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-08-12T02:17:50Z |
---
license: apache-2.0
tags:
- trl
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="Evan-Lin/Bart-abs-yelp-entailment-1")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("Evan-Lin/Bart-abs-yelp-entailment-1")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Evan-Lin/Bart-abs-yelp-entailment-1")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Eggsbena/model_0010
|
Eggsbena
| 2023-08-12T02:10:56Z | 25 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-08-12T02:02:16Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
|
simsim314/WizardLM-70B-V1.0-HF
|
simsim314
| 2023-08-12T01:11:14Z | 1,391 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-11T22:10:14Z |
---
license: llama2
---
Float16 of [WizardLM/WizardLM-70B-V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)
```python
from transformers import LlamaTokenizer, AutoModelForCausalLM
tokenizer = LlamaTokenizer.from_pretrained("simsim314/WizardLM-70B-V1.0-HF")
model = AutoModelForCausalLM.from_pretrained("simsim314/WizardLM-70B-V1.0-HF")
```
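Continuing the snippet above, a follow-up generation call might look like this (a minimal sketch; the prompt and sampling settings are arbitrary):

```python
prompt = "Tell me about the WizardLM project."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```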
|
RazzzHF/heard
|
RazzzHF
| 2023-08-12T01:09:51Z | 0 | 1 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2023-08-12T01:08:25Z |
---
license: cc-by-nc-nd-4.0
---
|
gsrujana/output
|
gsrujana
| 2023-08-12T00:55:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-08-12T00:05:56Z |
---
license: openrail++
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of a black backpack
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - gsrujana/output
These are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained on a photo of a black backpack using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
AmelieSchreiber/esm2_t6_8M_weighted_lora_rna_binding
|
AmelieSchreiber
| 2023-08-12T00:09:11Z | 9 | 1 |
peft
|
[
"peft",
"transformers",
"biology",
"esm",
"esm2",
"protein",
"protein language model",
"en",
"license:mit",
"region:us"
] | null | 2023-08-09T05:25:32Z |
---
library_name: peft
license: mit
language:
- en
tags:
- transformers
- biology
- esm
- esm2
- protein
- protein language model
---
# ESM-2 RNA Binding Site LoRA
This is a Parameter Efficient Fine Tuning (PEFT) Low Rank Adaptation (LoRA) of
the [esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) model for the (binary) token classification task of
predicting RNA binding sites of proteins. You can also find a version of this model
that was fine-tuned without LoRA [here](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_UR50D_rna_binding_site_predictor).
## Training procedure
This is a Low Rank Adaptation (LoRA) of `esm2_t6_8M_UR50D`,
trained on `166` protein sequences in the [RNA binding sites dataset](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites)
using a `80/20` train/test split. This model was trained with class weighting due to the imbalanced nature
of the RNA binding site dataset (fewer binding sites than non-binding sites). You can train your own version
using [this notebook](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_weighted_lora_rna_binding/blob/main/LoRA_binding_sites_no_sweeps_v2.ipynb)!
You just need the RNA `binding_sites.xml` file [found here](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites).
You may also need to run some `pip install` statements at the beginning of the script. If you are running in Colab, run:
```python
!pip install transformers[torch] datasets peft -q
```
```python
!pip install accelerate -U -q
```
Try to improve upon these metrics by adjusting the hyperparameters:
```
{'eval_loss': 0.49476009607315063,
'eval_precision': 0.14372964169381108,
'eval_recall': 0.7526652452025586,
'eval_f1': 0.24136752136752138,
'eval_auc': 0.7710141129858947,
'epoch': 15.0}
```
A similar model can also be trained using the GitHub repo, which provides a training script and a conda environment YAML and can be
[found here](https://github.com/Amelie-Schreiber/esm2_LoRA_binding_sites/tree/main). That version uses wandb sweeps for hyperparameter search,
but it does not use class weighting.
### Framework versions
- PEFT 0.4.0
## Using the Model
To use the model, try running the following pip install statements:
```python
!pip install transformers peft -q
```
then try running:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel
import torch
# Path to the saved LoRA model
model_path = "AmelieSchreiber/esm2_t6_8M_weighted_lora_rna_binding"
# ESM2 base model
base_model_path = "facebook/esm2_t6_8M_UR50D"
# Load the model
base_model = AutoModelForTokenClassification.from_pretrained(base_model_path)
loaded_model = PeftModel.from_pretrained(base_model, model_path)
# Ensure the model is in evaluation mode
loaded_model.eval()
# Load the tokenizer
loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Protein sequence for inference
protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence
# Tokenize the sequence
inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length')
# Run the model
with torch.no_grad():
    logits = loaded_model(**inputs).logits
# Get predictions
tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens
predictions = torch.argmax(logits, dim=2)
# Define labels
id2label = {
    0: "No binding site",
    1: "Binding site"
}
# Print the predicted labels for each token
for token, prediction in zip(tokens, predictions[0].numpy()):
    if token not in ['<pad>', '<cls>', '<eos>']:
        print((token, id2label[prediction]))
```
|
AmelieSchreiber/esm2_t12_35M_weighted_lora_rna_binding
|
AmelieSchreiber
| 2023-08-12T00:07:48Z | 7 | 1 |
peft
|
[
"peft",
"transformers",
"biology",
"esm",
"esm2",
"protein",
"protein language model",
"en",
"license:mit",
"region:us"
] | null | 2023-08-09T23:02:42Z |
---
library_name: peft
license: mit
language:
- en
tags:
- transformers
- biology
- esm
- esm2
- protein
- protein language model
---
# ESM-2 RNA Binding Site LoRA
This is a Parameter Efficient Fine Tuning (PEFT) Low Rank Adaptation (LoRA) of
the [esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) model for the (binary) token classification task of
predicting RNA binding sites of proteins. You can also find a version of this model
that was fine-tuned without LoRA [here](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_UR50D_rna_binding_site_predictor).
## Training procedure
This is a Low Rank Adaptation (LoRA) of `esm2_t12_35M_UR50D`,
trained on `166` protein sequences in the [RNA binding sites dataset](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites)
using a `80/20` train/test split. This model was trained with class weighting due to the imbalanced nature
of the RNA binding site dataset (fewer binding sites than non-binding sites). You can train your own version
using [this notebook](https://huggingface.co/AmelieSchreiber/esm2_t6_8M_weighted_lora_rna_binding/blob/main/LoRA_binding_sites_no_sweeps_v2.ipynb)!
You just need the RNA `binding_sites.xml` file [found here](https://huggingface.co/datasets/AmelieSchreiber/data_of_protein-rna_binding_sites).
You may also need to run some `pip install` statements at the beginning of the script. If you are running in Colab, run:
```python
!pip install transformers[torch] datasets peft -q
```
```python
!pip install accelerate -U -q
```
Try to improve upon these metrics by adjusting the hyperparameters:
```
{'eval_loss': 0.4705817401409149,
'eval_precision': 0.1656961355214399,
'eval_recall': 0.7417061611374408,
'eval_f1': 0.27087840761575077,
'eval_auc': 0.7983065882100389,
'epoch': 9.0}
```
A similar model can also be trained using the GitHub repo, which provides a training script and a conda environment YAML and can be
[found here](https://github.com/Amelie-Schreiber/esm2_LoRA_binding_sites/tree/main). That version uses wandb sweeps for hyperparameter search,
but it does not use class weighting.
### Framework versions
- PEFT 0.4.0
## Using the Model
To use the model, try running the following pip install statements:
```python
!pip install transformers peft -q
```
then try running:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
from peft import PeftModel
import torch
# Path to the saved LoRA model
model_path = "AmelieSchreiber/esm2_t12_35M_weighted_lora_rna_binding"
# ESM2 base model
base_model_path = "facebook/esm2_t12_35M_UR50D"
# Load the model
base_model = AutoModelForTokenClassification.from_pretrained(base_model_path)
loaded_model = PeftModel.from_pretrained(base_model, model_path)
# Ensure the model is in evaluation mode
loaded_model.eval()
# Load the tokenizer
loaded_tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Protein sequence for inference
protein_sequence = "MAVPETRPNHTIYINNLNEKIKKDELKKSLHAIFSRFGQILDILVSRSLKMRGQAFVIFKEVSSATNALRSMQGFPFYDKPMRIQYAKTDSDIIAKMKGT" # Replace with your actual sequence
# Tokenize the sequence
inputs = loaded_tokenizer(protein_sequence, return_tensors="pt", truncation=True, max_length=1024, padding='max_length')
# Run the model
with torch.no_grad():
    logits = loaded_model(**inputs).logits
# Get predictions
tokens = loaded_tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]) # Convert input ids back to tokens
predictions = torch.argmax(logits, dim=2)
# Define labels
id2label = {
    0: "No binding site",
    1: "Binding site"
}
# Print the predicted labels for each token
for token, prediction in zip(tokens, predictions[0].numpy()):
    if token not in ['<pad>', '<cls>', '<eos>']:
        print((token, id2label[prediction]))
```
|
DrishtiSharma/distilhubert-finetuned-gtzan-bs-8-fp16-false
|
DrishtiSharma
| 2023-08-11T23:55:41Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-08-11T21:52:36Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-bs-8-fp16-false
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.84
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-bs-8-fp16-false
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7162
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
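As a minimal usage sketch (not part of the original card), the fine-tuned checkpoint can be queried with the generic audio-classification pipeline; the clip path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="DrishtiSharma/distilhubert-finetuned-gtzan-bs-8-fp16-false")

# "track.wav" is a placeholder; GTZAN genres (jazz, rock, classical, ...) are returned with scores
print(classifier("track.wav"))
```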
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0985 | 1.0 | 113 | 1.9610 | 0.47 |
| 1.4194 | 2.0 | 226 | 1.3265 | 0.69 |
| 1.0868 | 3.0 | 339 | 0.9856 | 0.72 |
| 0.9134 | 4.0 | 452 | 0.8709 | 0.74 |
| 0.6349 | 5.0 | 565 | 0.7497 | 0.79 |
| 0.3755 | 6.0 | 678 | 0.7343 | 0.78 |
| 0.4007 | 7.0 | 791 | 0.5346 | 0.84 |
| 0.1607 | 8.0 | 904 | 0.5604 | 0.86 |
| 0.1802 | 9.0 | 1017 | 0.5005 | 0.89 |
| 0.0319 | 10.0 | 1130 | 0.6562 | 0.84 |
| 0.0158 | 11.0 | 1243 | 0.6639 | 0.84 |
| 0.1126 | 12.0 | 1356 | 0.6965 | 0.85 |
| 0.0095 | 13.0 | 1469 | 0.6919 | 0.84 |
| 0.0083 | 14.0 | 1582 | 0.7089 | 0.85 |
| 0.0088 | 15.0 | 1695 | 0.7162 | 0.84 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Bastian1111/a2c-PandaReachDense-v3
|
Bastian1111
| 2023-08-11T23:47:13Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-11T23:41:47Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.08
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
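Until the author fills in the snippet above, a plausible loading sketch looks like this; the checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed from the naming convention used by the SB3 hub integration
checkpoint = load_from_hub(
    repo_id="Bastian1111/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```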
|
LongSafari/hyenadna-tiny-1k-seqlen-d256
|
LongSafari
| 2023-08-11T23:47:10Z | 109 | 1 |
transformers
|
[
"transformers",
"arxiv:2306.15794",
"arxiv:2302.10866",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2023-08-11T22:26:31Z |
---
license: bsd-3-clause
---
# HyenaDNA
Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**.
See below for an [overview](#model) of the model and training. Better yet, check out these resources.
**Resources:**
- [arxiv](https://arxiv.org/abs/2306.15794)
- [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna)
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
- [github](https://github.com/HazyResearch/hyena-dna)
**Links to all HuggingFace models:**
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main)
- [tiny-1k-d256](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen-d256/tree/main)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main)
See [GPU requirements](#hardware) for each model.
### Sample snippet
This code example lets you select which pretrained model to load from HuggingFace, perform inference and get embeddings.
See the [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing) for these classes, or the ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) script in the main [github](https://github.com/HazyResearch/hyena-dna).
```python
import torch

# HyenaDNAPreTrainedModel and CharacterTokenizer come from the colab / huggingface.py script linked above
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# instantiate pretrained model
pretrained_model_name = 'hyenadna-medium-450k-seqlen'
max_length = 450_000
model = HyenaDNAPreTrainedModel.from_pretrained(
'./checkpoints',
pretrained_model_name,
)
# create tokenizer, no training involved :)
tokenizer = CharacterTokenizer(
characters=['A', 'C', 'G', 'T', 'N'], # add DNA characters
model_max_length=max_length,
)
# create a sample
sequence = 'ACTG' * int(max_length/4)
tok_seq = tokenizer(sequence)["input_ids"]
# place on device, convert to tensor
tok_seq = torch.LongTensor(tok_seq).unsqueeze(0).to(device) # unsqueeze for batch dim
# prep model and forward
model.to(device)
model.eval() # deterministic
with torch.inference_mode():
    embeddings = model(tok_seq)
print(embeddings.shape) # embeddings here!
```
### How to use pretrained weights
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
The colab is the easiest entry point: you can finetune a small model and run inference on DNA sequences up to 450k on the free tier (T4 GPU), and up to 1 million on the paid tier (A100). It handles all the HuggingFace integration for you, so it's helpful to see this example first.
- [github](https://github.com/HazyResearch/hyena-dna)
Otherwise, check out the main HyenaDNA repo for how to load weights into PyTorch Lightning. We use PyTorch Lightning for pretraining and fine-tuning all of our models. If you want to use our actual pretraining code, you can clone this HuggingFace repo to download the actual weights.ckpt, and then pass it to PyTorch Lightning via command line or config. See the [github](https://github.com/HazyResearch/hyena-dna) README for how to do all that.
If you want a standalone version that's easy to port into your own code (and not tied to our repo or Pytorch Lightning), we have that and a HuggingFace example in ['huggingface.py'](https://github.com/HazyResearch/hyena-dna/blob/main/huggingface.py) too.
### GPU requirements (suggested)
<a name="hardware"></a>
Here are suggestions on the hardware (preferred minimum) we think you can use for each model.
GPU during: Pretrain, fine-tune, inference
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40, T4, T4)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40, A100-40, T4)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40, A100-40, T4)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80, A100-80, A100-40)
T4: 16GB
A100-40: 40GB
A100-80: 80GB
## Model & Training Overview
<a name="model"></a>
HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations.
This enables HyenaDNA to reach context lengths up to 500x longer than previous genomic Transformer models using dense attention, and to train 160x faster at sequence length 1M (compared to Flash Attention).
We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer.
We pretrain using next token (nucleotide) prediction on the human reference genome (HG38).
HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning.
Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA!
### Authors
Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re.
**Contact**
Eric Nguyen, etnguyen@stanford.edu
Michael Poli, poli@stanford.edu
Marjan Faizi, Marjan_Faizi@hms.harvard.edu
## Citation
Feel free to cite us :)
```
@article{nguyen2023hyenadna,
title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution},
author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré},
year={2023},
eprint={2306.15794},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
OpenBuddy/openbuddy-coder-15b-v10-bf16
|
OpenBuddy
| 2023-08-11T23:29:27Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-11T23:05:12Z |
---
license: bigscience-openrail-m
---
|
NotAboutWinning/hutao-aivoice
|
NotAboutWinning
| 2023-08-11T23:25:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-08-11T23:14:57Z |
This model is by @makiligon on Discord; I had to upload it here because that is the only way I can use it.
|
RoussafiOmar/distilbert-base-uncased-finetuned-emotion
|
RoussafiOmar
| 2023-08-11T22:57:48Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-07T14:59:10Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9243661270394515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2286
- Accuracy: 0.9245
- F1: 0.9244
## Model description
More information needed
## Intended uses & limitations
More information needed
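A minimal inference sketch (assuming the standard text-classification pipeline; the example sentence is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="RoussafiOmar/distilbert-base-uncased-finetuned-emotion")

# Returns the predicted emotion label with its score
print(classifier("I can't wait to see you again!"))
```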
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.874 | 1.0 | 250 | 0.3410 | 0.9045 | 0.9032 |
| 0.2644 | 2.0 | 500 | 0.2286 | 0.9245 | 0.9244 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e7_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T22:28:00Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T22:05:59Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
SamuelReyes/Reinforce-alg
|
SamuelReyes
| 2023-08-11T22:20:17Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-11T22:20:08Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-alg
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T22:19:31Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T21:57:00Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
whywynn/ppo-SnowballTarget
|
whywynn
| 2023-08-11T22:16:50Z | 25 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-11T20:35:59Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: whywynn/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T22:02:35Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:44:18Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
MemerOwO/Chopper_JP
|
MemerOwO
| 2023-08-11T21:57:23Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2023-08-11T21:56:05Z |
---
license: bigcode-openrail-m
---
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e3_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T21:54:07Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:35:21Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e9_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T21:48:59Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T21:48:57Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
LovenOO/distilBERT_gptdata_with_preprocessing_grid_search
|
LovenOO
| 2023-08-11T21:47:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-11T20:48:32Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT_gptdata_with_preprocessing_grid_search
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_gptdata_with_preprocessing_grid_search
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2221
- Precision: 0.9563
- Recall: 0.9566
- F1: 0.9562
- Accuracy: 0.9561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 225 | 0.2531 | 0.9361 | 0.9359 | 0.9346 | 0.935 |
| No log | 2.0 | 450 | 0.1835 | 0.9514 | 0.9520 | 0.9512 | 0.9511 |
| 0.4372 | 3.0 | 675 | 0.1798 | 0.9543 | 0.9546 | 0.9539 | 0.9539 |
| 0.4372 | 4.0 | 900 | 0.2059 | 0.9499 | 0.9500 | 0.9497 | 0.9494 |
| 0.0575 | 5.0 | 1125 | 0.2002 | 0.9563 | 0.9567 | 0.9561 | 0.9561 |
| 0.0575 | 6.0 | 1350 | 0.2019 | 0.9557 | 0.9552 | 0.9553 | 0.955 |
| 0.0231 | 7.0 | 1575 | 0.2152 | 0.9548 | 0.9550 | 0.9546 | 0.9544 |
| 0.0231 | 8.0 | 1800 | 0.2156 | 0.9554 | 0.9556 | 0.9554 | 0.955 |
| 0.0116 | 9.0 | 2025 | 0.2240 | 0.9559 | 0.9561 | 0.9557 | 0.9556 |
| 0.0116 | 10.0 | 2250 | 0.2221 | 0.9563 | 0.9566 | 0.9562 | 0.9561 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e2_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T21:45:38Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:26:22Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
winterwinds/heptapod
|
winterwinds
| 2023-08-11T21:42:06Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2023-08-11T21:41:54Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e8_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T21:41:54Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T21:41:53Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e1_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T21:37:09Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:17:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e7_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T21:34:54Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T21:34:52Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e0_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T21:28:40Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:08:43Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
whywynn/ppo-PyramidsTraining
|
whywynn
| 2023-08-11T21:28:30Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-08-11T21:28:27Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: whywynn/ppo-PyramidsTraining
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e6_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T21:27:53Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T21:27:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
jordyvl/vit-base_rvl-cdip-small_rvl_cdip-NK1000_simkd_rand
|
jordyvl
| 2023-08-11T21:26:22Z | 164 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-08-11T11:06:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl-cdip-small_rvl_cdip-NK1000_simkd_rand
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl-cdip-small_rvl_cdip-NK1000_simkd_rand
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0648
- Accuracy: 0.6072
- Brier Loss: 0.5503
- Nll: 2.7228
- F1 Micro: 0.6072
- F1 Macro: 0.6102
- Ece: 0.1175
- Aurc: 0.1871
## Model description
More information needed
## Intended uses & limitations
More information needed
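A minimal usage sketch, assuming the checkpoint loads with the standard image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jordyvl/vit-base_rvl-cdip-small_rvl_cdip-NK1000_simkd_rand")

# "scanned_document.png" is a placeholder path to an RVL-CDIP-style document image
print(classifier("scanned_document.png"))
```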
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.0792 | 1.0 | 1000 | 0.0787 | 0.1103 | 0.9330 | 7.5575 | 0.1103 | 0.0531 | 0.0745 | 0.7817 |
| 0.0778 | 2.0 | 2000 | 0.0772 | 0.1867 | 0.9126 | 5.0754 | 0.1867 | 0.1367 | 0.1099 | 0.6583 |
| 0.0753 | 3.0 | 3000 | 0.0748 | 0.3185 | 0.8694 | 3.7575 | 0.3185 | 0.2711 | 0.2000 | 0.4734 |
| 0.0738 | 4.0 | 4000 | 0.0733 | 0.3633 | 0.8431 | 3.5732 | 0.3633 | 0.3239 | 0.2222 | 0.4034 |
| 0.0724 | 5.0 | 5000 | 0.0721 | 0.4083 | 0.8051 | 3.1556 | 0.4083 | 0.3747 | 0.2309 | 0.3543 |
| 0.0712 | 6.0 | 6000 | 0.0716 | 0.422 | 0.7734 | 3.1574 | 0.422 | 0.3887 | 0.1999 | 0.3595 |
| 0.07 | 7.0 | 7000 | 0.0703 | 0.4718 | 0.7584 | 2.9015 | 0.4718 | 0.4495 | 0.2548 | 0.2992 |
| 0.0693 | 8.0 | 8000 | 0.0696 | 0.493 | 0.7247 | 3.0406 | 0.493 | 0.4634 | 0.2430 | 0.2667 |
| 0.0683 | 9.0 | 9000 | 0.0693 | 0.4955 | 0.7122 | 3.1754 | 0.4955 | 0.4725 | 0.2329 | 0.2620 |
| 0.0675 | 10.0 | 10000 | 0.0682 | 0.528 | 0.6893 | 2.7012 | 0.528 | 0.5188 | 0.2477 | 0.2418 |
| 0.0668 | 11.0 | 11000 | 0.0680 | 0.5035 | 0.6961 | 2.9081 | 0.5035 | 0.4967 | 0.2243 | 0.2569 |
| 0.066 | 12.0 | 12000 | 0.0669 | 0.5595 | 0.6626 | 2.6583 | 0.5595 | 0.5541 | 0.2550 | 0.2169 |
| 0.0649 | 13.0 | 13000 | 0.0668 | 0.5493 | 0.6559 | 2.7433 | 0.5493 | 0.5525 | 0.2334 | 0.2165 |
| 0.0645 | 14.0 | 14000 | 0.0663 | 0.5645 | 0.6334 | 2.6699 | 0.5645 | 0.5605 | 0.2191 | 0.2099 |
| 0.0636 | 15.0 | 15000 | 0.0659 | 0.5765 | 0.6160 | 2.6406 | 0.5765 | 0.5664 | 0.2084 | 0.2012 |
| 0.0629 | 16.0 | 16000 | 0.0661 | 0.56 | 0.6311 | 2.7536 | 0.56 | 0.5686 | 0.2129 | 0.2026 |
| 0.0622 | 17.0 | 17000 | 0.0660 | 0.5725 | 0.6108 | 2.8055 | 0.5725 | 0.5707 | 0.1866 | 0.2046 |
| 0.0617 | 18.0 | 18000 | 0.0656 | 0.5697 | 0.6081 | 2.8309 | 0.5697 | 0.5730 | 0.1810 | 0.2024 |
| 0.0608 | 19.0 | 19000 | 0.0654 | 0.585 | 0.5982 | 2.6432 | 0.585 | 0.5867 | 0.1834 | 0.1975 |
| 0.06 | 20.0 | 20000 | 0.0656 | 0.584 | 0.5959 | 2.8363 | 0.584 | 0.5856 | 0.1662 | 0.2067 |
| 0.0592 | 21.0 | 21000 | 0.0657 | 0.5875 | 0.5896 | 2.8259 | 0.5875 | 0.5892 | 0.1575 | 0.2059 |
| 0.0584 | 22.0 | 22000 | 0.0655 | 0.5887 | 0.5832 | 2.8147 | 0.5887 | 0.5895 | 0.1531 | 0.1998 |
| 0.058 | 23.0 | 23000 | 0.0654 | 0.5945 | 0.5829 | 2.9399 | 0.5945 | 0.5955 | 0.1475 | 0.2007 |
| 0.0571 | 24.0 | 24000 | 0.0654 | 0.5962 | 0.5779 | 2.8266 | 0.5962 | 0.5982 | 0.1460 | 0.1996 |
| 0.0566 | 25.0 | 25000 | 0.0655 | 0.596 | 0.5815 | 2.9480 | 0.596 | 0.5975 | 0.1447 | 0.2099 |
| 0.0561 | 26.0 | 26000 | 0.0660 | 0.5883 | 0.5840 | 2.9985 | 0.5883 | 0.5903 | 0.1146 | 0.2202 |
| 0.0556 | 27.0 | 27000 | 0.0654 | 0.6042 | 0.5713 | 2.8775 | 0.6042 | 0.6052 | 0.1353 | 0.2020 |
| 0.055 | 28.0 | 28000 | 0.0655 | 0.5945 | 0.5750 | 3.0404 | 0.5945 | 0.5965 | 0.1215 | 0.2051 |
| 0.0546 | 29.0 | 29000 | 0.0655 | 0.5978 | 0.5740 | 2.9173 | 0.5978 | 0.6012 | 0.1226 | 0.2046 |
| 0.0543 | 30.0 | 30000 | 0.0657 | 0.588 | 0.5813 | 3.0493 | 0.588 | 0.5915 | 0.1210 | 0.2104 |
| 0.054 | 31.0 | 31000 | 0.0652 | 0.597 | 0.5715 | 2.9423 | 0.597 | 0.5989 | 0.1207 | 0.2055 |
| 0.0537 | 32.0 | 32000 | 0.0650 | 0.6075 | 0.5618 | 2.8731 | 0.6075 | 0.6080 | 0.1209 | 0.1987 |
| 0.0534 | 33.0 | 33000 | 0.0650 | 0.602 | 0.5651 | 2.7807 | 0.602 | 0.6046 | 0.1254 | 0.1988 |
| 0.0535 | 34.0 | 34000 | 0.0652 | 0.602 | 0.5661 | 3.0050 | 0.602 | 0.6068 | 0.1187 | 0.1977 |
| 0.053 | 35.0 | 35000 | 0.0649 | 0.6008 | 0.5603 | 2.8814 | 0.6008 | 0.6028 | 0.1172 | 0.1981 |
| 0.0527 | 36.0 | 36000 | 0.0649 | 0.5988 | 0.5575 | 2.8419 | 0.5988 | 0.5974 | 0.1156 | 0.1917 |
| 0.0526 | 37.0 | 37000 | 0.0649 | 0.598 | 0.5586 | 2.7982 | 0.598 | 0.5986 | 0.1173 | 0.1900 |
| 0.0524 | 38.0 | 38000 | 0.0646 | 0.604 | 0.5546 | 2.8202 | 0.604 | 0.6060 | 0.1244 | 0.1908 |
| 0.0524 | 39.0 | 39000 | 0.0651 | 0.5965 | 0.5627 | 2.8458 | 0.5965 | 0.6010 | 0.1125 | 0.1949 |
| 0.0522 | 40.0 | 40000 | 0.0649 | 0.6072 | 0.5515 | 2.7872 | 0.6072 | 0.6100 | 0.1211 | 0.1881 |
| 0.0521 | 41.0 | 41000 | 0.0648 | 0.6078 | 0.5542 | 2.7802 | 0.6078 | 0.6108 | 0.1199 | 0.1868 |
| 0.052 | 42.0 | 42000 | 0.0648 | 0.6 | 0.5557 | 2.7968 | 0.6 | 0.6029 | 0.1190 | 0.1940 |
| 0.0519 | 43.0 | 43000 | 0.0647 | 0.604 | 0.5503 | 2.7110 | 0.604 | 0.6060 | 0.1178 | 0.1896 |
| 0.0516 | 44.0 | 44000 | 0.0647 | 0.6065 | 0.5515 | 2.7595 | 0.6065 | 0.6089 | 0.1170 | 0.1870 |
| 0.0516 | 45.0 | 45000 | 0.0646 | 0.611 | 0.5496 | 2.7426 | 0.611 | 0.6129 | 0.1212 | 0.1873 |
| 0.0515 | 46.0 | 46000 | 0.0648 | 0.6082 | 0.5510 | 2.7436 | 0.6082 | 0.6120 | 0.1227 | 0.1876 |
| 0.0514 | 47.0 | 47000 | 0.0647 | 0.6088 | 0.5511 | 2.7379 | 0.6088 | 0.6115 | 0.1240 | 0.1874 |
| 0.0514 | 48.0 | 48000 | 0.0647 | 0.6095 | 0.5501 | 2.7369 | 0.6095 | 0.6122 | 0.1193 | 0.1868 |
| 0.0513 | 49.0 | 49000 | 0.0647 | 0.6095 | 0.5508 | 2.7295 | 0.6095 | 0.6122 | 0.1218 | 0.1870 |
| 0.0513 | 50.0 | 50000 | 0.0648 | 0.6072 | 0.5503 | 2.7228 | 0.6072 | 0.6102 | 0.1175 | 0.1871 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
Universal-NER/UniNER-7B-all
|
Universal-NER
| 2023-08-11T21:24:35Z | 5,132 | 89 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2308.03279",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-11T20:52:49Z |
---
license: cc-by-nc-4.0
language:
- en
---
# UniNER-7B-all
**Description**: This model is the best UniNER model. It is trained on the combination of three data sources: (1) ChatGPT-generated [Pile-NER-type data](https://huggingface.co/datasets/Universal-NER/Pile-NER-type), (2) ChatGPT-generated [Pile-NER-definition data](https://huggingface.co/datasets/Universal-NER/Pile-NER-definition), and (3) 40 supervised datasets in the Universal NER benchmark (see Fig. 4 in the paper), where we randomly sample up to 10K instances from the train split of each dataset. Note that CrossNER and MIT datasets are excluded from training for OOD evaluation.
Check our [paper](https://arxiv.org/abs/2308.03279) for more information. Check our [repo](https://github.com/universal-ner/universal-ner) about how to use the model.
## Inference
The template for inference instances is as follows:
<div style="background-color: #f6f8fa; padding: 20px; border-radius: 10px; border: 1px solid #e1e4e8; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
<strong>Prompting template:</strong><br/>
A virtual assistant answers questions from a user based on the provided text.<br/>
USER: Text: <span style="color: #d73a49;">{Fill the input text here}</span><br/>
ASSISTANT: I’ve read this text.<br/>
USER: What describes <span style="color: #d73a49;">{Fill the entity type here}</span> in the text?<br/>
ASSISTANT: <span style="color: #0366d6;">(model's predictions in JSON format)</span><br/>
</div>
### Note: Inferences are based on one entity type at a time. For multiple entity types, create separate instances for each type.
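For reference, a rough way to build this prompt and generate with `transformers` is sketched below. The exact conversation formatting used at training time is documented in the repo linked above, so treat this as an approximation; the example text and entity type are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

text = "Barack Obama visited Microsoft headquarters in Redmond."  # placeholder input text
entity_type = "organization"                                      # placeholder entity type

# Conversation built from the prompting template shown above
prompt = (
    "A virtual assistant answers questions from a user based on the provided text.\n"
    f"USER: Text: {text}\n"
    "ASSISTANT: I've read this text.\n"
    f"USER: What describes {entity_type} in the text?\n"
    "ASSISTANT:"
)

tokenizer = AutoTokenizer.from_pretrained("Universal-NER/UniNER-7B-all")
model = AutoModelForCausalLM.from_pretrained(
    "Universal-NER/UniNER-7B-all", torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens (the model's JSON-formatted predictions)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```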
## License
This model and its associated data are released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license and are intended primarily for research purposes.
## Citation
```bibtex
@article{zhou2023universalner,
title={UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition},
author={Wenxuan Zhou and Sheng Zhang and Yu Gu and Muhao Chen and Hoifung Poon},
year={2023},
eprint={2308.03279},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e5_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T21:20:52Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T21:20:51Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e-1_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T21:20:19Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:00:05Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
rootuj/first_classification_model
|
rootuj
| 2023-08-11T21:19:59Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-08-11T20:12:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: first_classification_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.92972
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_classification_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2379
- Accuracy: 0.9297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2324 | 1.0 | 1563 | 0.1936 | 0.9270 |
| 0.1526 | 2.0 | 3126 | 0.2379 | 0.9297 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Universal-NER/UniNER-7B-type-sup
|
Universal-NER
| 2023-08-11T21:12:48Z | 2,575 | 8 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"arxiv:2308.03279",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-11T18:12:34Z |
---
license: cc-by-nc-4.0
language:
- en
---
# UniNER-7B-type-sup
**Description**: This model was trained on the combination of two data sources: (1) ChatGPT-generated [Pile-NER-type data](https://huggingface.co/datasets/Universal-NER/Pile-NER-type), and (2) 40 supervised datasets in the Universal NER benchmark (see Fig. 4 in paper), where we randomly sample 10K instances from the train split of each dataset. Note that CrossNER and MIT datasets are excluded from training for OOD evaluation.
Check our [paper](https://arxiv.org/abs/2308.03279) for more information. Check our [repo](https://github.com/universal-ner/universal-ner) about how to use the model.
## Inference
The template for inference instances is as follows:
<div style="background-color: #f6f8fa; padding: 20px; border-radius: 10px; border: 1px solid #e1e4e8; box-shadow: 0 2px 5px rgba(0,0,0,0.1);">
<strong>Prompting template:</strong><br/>
A virtual assistant answers questions from a user based on the provided text.<br/>
USER: Text: <span style="color: #d73a49;">{Fill the input text here}</span><br/>
ASSISTANT: I’ve read this text.<br/>
USER: What describes <span style="color: #d73a49;">{Fill the entity type here}</span> in the text?<br/>
ASSISTANT: <span style="color: #0366d6;">(model's predictions in JSON format)</span><br/>
</div>
### Note: Inferences are based on one entity type at a time. For multiple entity types, create separate instances for each type.
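A minimal generation sketch following the template above (plain `transformers` generation; the official repo provides its own utilities for building this conversation, so the manual prompt assembly and generation settings here are assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Universal-NER/UniNER-7B-type-sup"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

text = "Steve Jobs co-founded Apple in Cupertino in 1976."
entity_type = "organization"

# Fill the prompting template shown above, one entity type per instance.
prompt = (
    "A virtual assistant answers questions from a user based on the provided text.\n"
    f"USER: Text: {text}\n"
    "ASSISTANT: I've read this text.\n"
    f"USER: What describes {entity_type} in the text?\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# The model is expected to answer with the matching entities in JSON format.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```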
## License
This model and its associated data are released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license. They are primarily used for research purposes.
## Citation
```bibtex
@article{zhou2023universalner,
title={UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition},
author={Wenxuan Zhou and Sheng Zhang and Yu Gu and Muhao Chen and Hoifung Poon},
year={2023},
eprint={2308.03279},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e3_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T21:06:49Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T21:06:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e8_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T21:06:28Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T21:06:27Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
machinelearningzuu/detr-resnet-50_finetuned-normal-vs-disabled
|
machinelearningzuu
| 2023-08-11T21:05:27Z | 191 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-08-11T18:09:31Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned-normal-vs-disabled
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned-normal-vs-disabled
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unspecified custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
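A minimal inference sketch, assuming the standard DETR object-detection API in `transformers` (the image path is a placeholder, and the label set comes from whatever `id2label` this fine-tuned checkpoint stores):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

model_id = "machinelearningzuu/detr-resnet-50_finetuned-normal-vs-disabled"
processor = AutoImageProcessor.from_pretrained(model_id)
model = DetrForObjectDetection.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder path to any RGB image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])
detections = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```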
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dfomin/Reinforce-PixelCopter
|
dfomin
| 2023-08-11T20:52:01Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-09T20:44:44Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 27.90 +/- 27.14
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e6_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T20:48:05Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:48:04Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_prefix_tuning_500_10_3000_8_e0_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T20:43:24Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:43:20Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e5_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T20:38:54Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:38:53Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
onurwzy/FiratSobutay
|
onurwzy
| 2023-08-11T20:32:38Z | 0 | 0 | null |
[
"music",
"tr",
"license:openrail",
"region:us"
] | null | 2023-08-11T20:25:48Z |
---
license: openrail
language:
- tr
tags:
- music
---
A voice model built from audio I collected from Fırat Sobutay's videos.
|
KingKazma/xsum_gpt2_p_tuning_500_10_3000_8_e4_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T20:29:43Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:29:42Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_4_e9_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T20:20:13Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:20:12Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
gorkemgoknar/llama2-chatbot-merged
|
gorkemgoknar
| 2023-08-11T20:13:52Z | 140 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama2",
"metayazar",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-07T07:05:38Z |
---
language:
- en
tags:
- llama2
- metayazar
---
# Llama2 Movie Characters Finetuned
Fine-tuned on top of https://huggingface.co/TinyPixel/Llama-2-7B-bf16-sharded
## Prompt format
```
prompt = f"### Context: {overall_context}### History: {previous_dialogue} ### {character_1}: {question}### {character_2}:"
```
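A minimal generation sketch using the prompt format above (fp16/auto loading is shown here as an assumption; the linked Space uses a 4-bit quantized version instead, and the character names, context, and sampling settings below are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gorkemgoknar/llama2-chatbot-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

overall_context = "on a cruise ship going to Mars from Earth"
previous_dialogue = ""
character_1, character_2 = "Gerald", "Ig-11"
question = "What do you think about AI?"

# Same template as the Prompt format section above.
prompt = (
    f"### Context: {overall_context}### History: {previous_dialogue} "
    f"### {character_1}: {question}### {character_2}:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```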
A Space using this model's 4-bit quantized version:
https://huggingface.co/spaces/gorkemgoknar/moviechatbot-v2
## Some Fun dialogues
(There are more characters, find them!)
## Geralt of The Witcher vs Ig-11 of The Mandalorian
context= "on a cruise ship going to Mars from Earth"
#Gerald (Geralt of The Witcher; not sure why I named him incorrectly) asks Ig-11 (The Mandalorian)
initial_text = "What do you think about AI?"
```
Gerald : What do you think about AI?
Ig-11 : I've worked with them for more than half of my operational cycle.
Gerald : So, how long has the ship been here?
Ig-11 : 765 years (33 days) since last port.
Gerald : So this is the fourth time since I joined the ship that we've had a murderer on board?
Ig-11 : My master does not believe in coincidences... He believes everything happens for a reason. The universe has a plan, and it includes me being here at the same time as you. So, what do you think about AI?
Gerald : I'm not sure yet... I mean, they work on Earth, and maybe beyond.
Ig-11 : You see? It is all so simple! The humans are going to Mars; we are bringing them with us.
```
## Batman vs Tony Stark
context= "tries to save the world"
initial_text = "I am richer than you!"
```
Batman : I am richer than you!
Tony Stark : My car goes 100mph!
Batman : My car can go at least that fast, and it's not even suped up. everybody in the car. It didn't even have a license plate! It was so loud I couldn't hear myself think. It felt like we were gonna get killed any second...
Tony Stark : You need to chill out, Bruce. You can be such a bitch sometimes.~Tony Stark
Batman : You don't get it do you? It's not about justice. It's about power.~Bruce Wayne
Tony Stark : You know the one thing that really gets under my skin? When someone shoots himself in the foot so bad, he ends up putting on a hobble~Tony Stark
Batman : "Let me tell you something. Something I've learned to my great cost..."~Batman
Tony Stark : "You make it sound like a disease."
```
## Darth Vader vs James T. Kirk
```
Vader : I challenge you to battle of words!
Kirk : What are your terms?
Vader : The Emperor's terms!
Kirk : I challenge you to the ultimate game, a verbal battle of wit and wile. surely you know that no mere mortal can stand against my brilliance.
Vader : I accept your challenge. The terms I propose is, the winner shall have the pleasure of killing the loser!
Kirk : Deal! But you will be disappointed. Though my words are powerful and sharp, they lack the strength to kill.
Vader : You speak truthfully. My vocabulary lacks power. It is but a fraction of what it was 10 years ago at the height of my powers.
Kirk : I've never heard you speak like this before.
```
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_4_e8_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T20:11:50Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:11:48Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Amirhnrn/dqn-SpaceInvadersNoFrameskip-v4
|
Amirhnrn
| 2023-08-11T20:09:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-08-11T20:09:20Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 656.00 +/- 211.67
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Amirhnrn -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Amirhnrn -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Amirhnrn
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
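As an alternative to the RL Zoo `enjoy` script, a minimal sketch of loading the downloaded checkpoint directly with stable-baselines3 (the zip path under `logs/` is an assumption about where `rl_zoo3.load_from_hub` places it; the wrappers must match the `env_wrapper` and `frame_stack` entries above):

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Recreate the training-time preprocessing: AtariWrapper + 4-frame stacking.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

# Assumed path; adjust to wherever the checkpoint was downloaded.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip", env=env)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```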
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e7_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T20:04:32Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T20:04:28Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
LeWince/distilbert-base-uncased-finetuned-diabetes-v2
|
LeWince
| 2023-08-11T19:59:49Z | 23 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-11T18:39:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbert-base-uncased-finetuned-diabetes-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-diabetes-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3478
- Rouge1: 0.8752
- Rouge2: 0.5829
- Rougel: 0.5624
- Rougelsum: 0.5624
## Model description
More information needed
## Intended uses & limitations
More information needed
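A minimal usage sketch, assuming the standard `question-answering` pipeline (the question and context below are illustrative and not taken from the training data):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="LeWince/distilbert-base-uncased-finetuned-diabetes-v2",
)

result = qa(
    question="What hormone regulates blood sugar?",
    context="Insulin is a hormone produced by the pancreas that regulates blood sugar levels.",
)
print(result["answer"], result["score"])
```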
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 0.3883 | 1.0 | 5992 | 0.3311 | 0.7572 | 0.5600 | 0.5547 | 0.5547 |
| 0.3113 | 2.0 | 11984 | 0.3126 | 0.7785 | 0.5312 | 0.5344 | 0.5344 |
| 0.272 | 3.0 | 17976 | 0.2679 | 0.8374 | 0.5959 | 0.5629 | 0.5629 |
| 0.2361 | 4.0 | 23968 | 0.2686 | 0.8229 | 0.6013 | 0.5672 | 0.5672 |
| 0.2242 | 5.0 | 29960 | 0.2903 | 0.8145 | 0.5804 | 0.5604 | 0.5604 |
| 0.2128 | 6.0 | 35952 | 0.2885 | 0.8533 | 0.5852 | 0.5551 | 0.5551 |
| 0.1996 | 7.0 | 41944 | 0.2984 | 0.8526 | 0.5844 | 0.5602 | 0.5602 |
| 0.1875 | 8.0 | 47936 | 0.3170 | 0.8684 | 0.5852 | 0.5638 | 0.5638 |
| 0.1751 | 9.0 | 53928 | 0.3245 | 0.8702 | 0.5796 | 0.5604 | 0.5604 |
| 0.1652 | 10.0 | 59920 | 0.3478 | 0.8752 | 0.5829 | 0.5624 | 0.5624 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Bastian1111/ppo-SnowballTarget
|
Bastian1111
| 2023-08-11T19:59:19Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-08-11T19:59:16Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Bastian1111/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_8_e6_s6789_v3_l6_v100
|
KingKazma
| 2023-08-11T19:55:41Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T19:55:37Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
KingKazma/cnn_dailymail_gpt2_prompt_tuning_500_10_3000_4_e6_s6789_v3_l4_v100
|
KingKazma
| 2023-08-11T19:54:33Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-08-11T19:54:32Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|