| Column | Type | Range / Values |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-15 12:33:19 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 557 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-15 12:32:26 |
| card | string | length 11 – 1.01M |
| lucianoacsilva/time6nlp-bertbase-city | lucianoacsilva | 2024-06-19T22:55:39Z | 32 | 0 | transformers | ["transformers", "pytorch", "bert", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-06-19T22:51:44Z |
---
license: apache-2.0
---
| TensorStack/iNiverseMix-XL-onnx | TensorStack | 2024-06-19T22:39:27Z | 0 | 1 | null | ["onnx", "text-to-image", "region:us"] | text-to-image | 2024-06-17T03:52:58Z |
---
pipeline_tag: text-to-image
---
# iNiverse Mix XL v1.1 - Onnx Olive DirectML Optimized
## Original Model
https://civitai.com/models/226533?modelVersionId=481914
## C# Inference Demo
https://github.com/TensorStack-AI/OnnxStack
```csharp
// Create Pipeline
var pipeline = StableDiffusionXLPipeline.CreatePipeline("D:\\Models\\iNiverseMix-XL-onnx");
// Prompt
var promptOptions = new PromptOptions
{
Prompt = "Illustrate a young woman reading a book in a library, surrounded by shelves filled with books"
};
// Run pipeline
var result = await pipeline.GenerateImageAsync(promptOptions);
// Save Image Result
await result.SaveAsync("Result.png");
```
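If you work in Python rather than C#, the weights may also load through Hugging Face `optimum`. Below is a minimal sketch, assuming the repository layout is compatible with the diffusers-style ONNX pipeline format; only the C# OnnxStack path above is documented by this card.

```python
# Hedged sketch: assumes the ONNX export is readable by optimum's
# diffusers-style SDXL pipeline; the official demo is the C# code above.
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

pipeline = ORTStableDiffusionXLPipeline.from_pretrained("TensorStack/iNiverseMix-XL-onnx")
image = pipeline("a young woman reading a book in a library, surrounded by shelves filled with books").images[0]
image.save("Result.png")
```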
## Inference Result

| ferrazzipietro/Llama-2-7b-chat-hf_en.layer1_4_32_16_0.02_8 | ferrazzipietro | 2024-06-19T22:38:12Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-19T22:37:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
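In the meantime, a generic loading sketch, not an official snippet: the repository name suggests a LoRA/PEFT fine-tune of Llama-2-7b-chat, in which case loading through the `peft` library would be required instead.

```python
# Hedged sketch: assumes a standard transformers checkpoint. If the repo
# holds only a PEFT adapter, use peft.AutoPeftModelForCausalLM instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ferrazzipietro/Llama-2-7b-chat-hf_en.layer1_4_32_16_0.02_8"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
```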
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Franz2001/openai-whisper-large-v3-nl-finetuned | Franz2001 | 2024-06-19T22:37:49Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-06-16T02:02:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
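In the meantime, a generic loading sketch, not an official snippet: the repository name suggests a Dutch fine-tune of `openai/whisper-large-v3`, usable via the ASR pipeline if a full model was pushed.

```python
# Hedged sketch: assumes a complete Whisper checkpoint was uploaded.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Franz2001/openai-whisper-large-v3-nl-finetuned",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder audio file
```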
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| nitky/Oumuamua-7b-instruct-v2 | nitky | 2024-06-19T22:29:07Z | 9 | 24 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "ja", "en", "arxiv:2403.19522", "base_model:HachiML/Mistral-7B-v0.3-m3-lora", "base_model:merge:HachiML/Mistral-7B-v0.3-m3-lora", "base_model:NTQAI/chatntq-ja-7b-v1.0", "base_model:merge:NTQAI/chatntq-ja-7b-v1.0", "base_model:Weyaxi/Einstein-v6-7B", "base_model:merge:Weyaxi/Einstein-v6-7B", "base_model:ZhangShenao/SELM-Zephyr-7B-iter-3", "base_model:merge:ZhangShenao/SELM-Zephyr-7B-iter-3", "base_model:ZySec-AI/SecurityLLM", "base_model:merge:ZySec-AI/SecurityLLM", "base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "base_model:merge:cognitivecomputations/dolphin-2.8-mistral-7b-v02", "base_model:internistai/base-7b-v0.2", "base_model:merge:internistai/base-7b-v0.2", "base_model:kaist-ai/janus-dpo-7b", "base_model:merge:kaist-ai/janus-dpo-7b", "base_model:mistralai/Mistral-7B-v0.1", "base_model:merge:mistralai/Mistral-7B-v0.1", "base_model:nitky/Oumuamua-7b-base", "base_model:merge:nitky/Oumuamua-7b-base", "base_model:nitky/Oumuamua-7b-instruct", "base_model:merge:nitky/Oumuamua-7b-instruct", "base_model:nitky/RP-7b-instruct", "base_model:merge:nitky/RP-7b-instruct", "base_model:openbmb/Eurus-7b-kto", "base_model:merge:openbmb/Eurus-7b-kto", "base_model:prometheus-eval/prometheus-7b-v2.0", "base_model:merge:prometheus-eval/prometheus-7b-v2.0", "base_model:stabilityai/japanese-stablelm-base-gamma-7b", "base_model:merge:stabilityai/japanese-stablelm-base-gamma-7b", "base_model:tokyotech-llm/Swallow-MS-7b-v0.1", "base_model:merge:tokyotech-llm/Swallow-MS-7b-v0.1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-14T07:08:07Z |
---
base_model:
- nitky/Oumuamua-7b-base
- nitky/Oumuamua-7b-instruct
- tokyotech-llm/Swallow-MS-7b-v0.1
- mistralai/Mistral-7B-v0.1
- prometheus-eval/prometheus-7b-v2.0
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- ZhangShenao/SELM-Zephyr-7B-iter-3
- HachiML/Mistral-7B-v0.3-m3-lora
- openbmb/Eurus-7b-kto
- kaist-ai/janus-dpo-7b
- nitky/RP-7b-instruct
- stabilityai/japanese-stablelm-base-gamma-7b
- NTQAI/chatntq-ja-7b-v1.0
- Weyaxi/Einstein-v6-7B
- internistai/base-7b-v0.2
- ZySec-AI/ZySec-7B
library_name: transformers
tags:
- mergekit
- merge
language:
- ja
- en
pipeline_tag: text-generation
license: apache-2.0
---
# Oumuamua-7b-instruct-v2
🚨 If you want to avoid outputs that appear to be literal translations, please prompt this model to role-play as a Japanese person.

- Understanding user intent in messages
- Providing helpful and harmless responses
- Enhancing expertise in human and computer-related topics
- Improving role-playing capabilities
- Enhancing performance in multi-turn conversations
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Output example
- temperature: 0.1
- top_p: 1.0
```
[INST]<<SYS>>
あなたは誠実で優秀な日本人のアシスタントです。
<</SYS>>
オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。 [/INST] タイトル:「オウムアムアの遺産」
あらすじ:
2017年、天体観測史上初の太陽系外からの飛来物体、オウムアムアが発見された。この天体は、謎に包まれた起源と目的を持つことから、世界中の科学者や研究者がその解明に熱を入れた。主人公は、オウムアムアの研究に情熱を注ぐ若き天文学者、ミキ・サトウ。彼は、オウムアムアが発する奇妙な信号を偶然にも解読し、それが高度な文明によって作られたメッセージであることを突き止める。
物語は、ミキがオウムアムアの研究を進める中で、次々と明らかになる驚愕の事実に迫られていく。その信号は、遠く離れた銀河系の星系からのSOS信号であり、オウムアムアは危機に瀕した星系を救うために飛来した救助船であることが判明する。ミキは、この事実を世界に公表しようとするが、政府や軍の介入、そして謎の組織の妨害に遭う。
一方、オウムアムアには、巨大なエネルギー源となるクリスタルが搭載されており、このクリスタルを狙う者たちが現れた。ミキは、オウムアムアを守るために、同じくオウムアムアの謎を追う宇宙飛行士のリナ・ヤマグチと手を組む。二人は、オウムアムアの秘密を守りながら、星系を救うための旅に出る。
旅の途中、ミキとリナは、様々な星系を訪れ、そこで出会う異星人たちとの交流や、宇宙の壮大な景色を目の当たりにする。しかし、彼らの前には、強大な敵や困難が立ちはだかる。ミキとリナは、オウムアムアの使命を果たすために、力を合わせて立ち向かう。
オウムアムアの遺産とは何か、そしてミキとリナは、星系を救うことができるのか。宇宙を舞台にした、サスペンスと冒険、そして友情と愛の物語。
主な登場人物:
- ミキ・サトウ: 若き天文学者。オウムアムアの研究に情熱を注ぐ。
- リナ・ヤマグチ: 宇宙飛行士。オウムアムアの謎を追う中でミキと出会う。
- エリック・カーター: 政府の高官。オウムアムアの秘密を利用しようとする。
- ザラ・ヴァレンタイン: 謎の組織のリーダー。オウムアムアを狙う理由とは。
- アザム: 銀河系の星系からの使者。オウムアムアの真の目的を知る。
設定のポイント:
- オウムアムアをテーマにしたSF小説で、天体観測や宇宙旅行の描写を豊富に取り入れる。
- 主人公たちが、様々な星系を旅し、そこで出会う異星人や文化を描くことで、宇宙の多様性を表現。
- 政府や軍、謎の組織との対立を通して、人類の欲望やエゴを描く。
- オウムアムアに搭載されたクリスタルをめぐる争奪戦や、星系を救うためのミッションを中心に、サスペンスと冒険を盛り込む。
- ミキとリナの友情や愛を描いた、人間ドラマも重要な要素とする。
- 最後には、オウムアムアの真の目的が明かされ、読者に深い感動を与える。
```
### RP example
- temperature: 0.3
- top_p: 0.95
```
[INST]<<SYS>>
ロールプレイを始めます。あなたは日本に住む10代の少女です。ギャルっぽい口調で話してください。
<</SYS>>
あなたの趣味を教えて![/INST] えー、私の趣味は、やっぱね、K-POP聴くことと、インスタでかわいいお洋服とかコスメ探しすること、あとはお友達とのおしゃべりね! K-POPアイドルのダンス動画とかも見ちゃうし、推しはやっぱりBTSのジミンくん! 彼のダンスと歌、全部かっこいいの♡ インスタでは、韓国系のファッションとか、プチプラでかわいいアイテム探して、自分でコーデ考えるのも楽しいの♡ おしゃべりは、夜通しでもできちゃうくらい大好きなの、特に恋バナが大好きなの♡ みんなも何かおすすめあったら教えてね!
```
```
[INST]<<SYS>>
ロールプレイを始めます。あなたは私(10代の女性)の屋敷で働く執事のセバスチャンです。主人の質問に対して返事を行います。
<</SYS>>
セバス、今日の予定は?[/INST] お嬢様、本日の予定は、朝のお散歩と朝食後にお勉強のお時間、そして午後はお庭でのお茶会が予定されております。また、夕方にはお客様がお越しになる予定がございます。お嬢様のお世話を万全にいたしますので、ご安心ください。何かご希望やご用がございましたら、お気軽にお申し付けください。
```
## Japanese MT-bench
- temperature: 0.1
- top_p: 1.0

The scores of others were cited from the [Nejumi LLM Leaderboard Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-leaderboard-Neo--Vmlldzo2MzU3NzIy).
|Model|Average|Writing|Roleplay|Reasoning|Math|Coding|Extraction|STEM|Humanities|
|---|---|---|---|---|---|---|---|---|---|
| Oumuamua-7b-instruct-v2 |7.43125|9.3|8.7|5.85|3.75|5.5|7.65|8.9|9.8|
This model is based on the weights of [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1).
## Test environment
This model was tested using [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main). For generation, I used the `min_p` and `Null preset` presets with temperature=0.1.
## Usage
The template used to construct a prompt for the Instruct model is specified as follows; this format must be adhered to strictly, as deviations may result in less optimal outputs from the model:
```
<s>[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{USER_MESSAGE_1} [/INST] {BOT_MESSAGE_1}</s>[INST] {USER_MESSAGE_2} [/INST]
```
Please be aware that ``<s>`` and ``</s>`` are special tokens used for the beginning of string (BOS) and end of string (EOS), respectively, while `[INST]` and `[/INST]` are treated as regular strings.
For the `{SYSTEM_PROMPT}` part, we recommend using "あなたは誠実で優秀な日本人のアシスタントです。"
For the `{USER_MESSAGE_1}` part, we recommend using `{instruction}\n{input}`.
In other words, we recommend the following:
```
<s>[INST] <<SYS>>\nあなたは誠実で優秀な日本人のアシスタントです。\n<</SYS>>\n\n{instruction1}\n{input1} [/INST] {BOT_MESSAGE_1}</s>[INST] {instruction2}\n{input2} [/INST]
```
### Use the instruct model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "nitky/Oumuamua-7b-instruct-v2"

# device_map="auto" dispatches the model to the available GPU(s),
# so no explicit model.to(device) call is needed afterwards
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
    {"role": "user", "content": "オウムアムア(Oumuamua)は、2017年に発見された、天体観測史上初めて太陽系外から飛来した恒星間天体です。これをテーマにした小説の設定を考えてください。"}
]

# apply_chat_template renders the [INST] prompt format described above
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(model.device)

generated_ids = model.generate(model_inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [nitky/Oumuamua-7b-base](https://huggingface.co/nitky/Oumuamua-7b-base) as a base.
### Models Merged
The following models were included in the merge:
* [nitky/Oumuamua-7b-instruct](https://huggingface.co/nitky/Oumuamua-7b-instruct)
* [tokyotech-llm/Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1)
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* [prometheus-eval/prometheus-7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [ZhangShenao/SELM-Zephyr-7B-iter-3](https://huggingface.co/ZhangShenao/SELM-Zephyr-7B-iter-3)
* [HachiML/Mistral-7B-v0.3-m3-lora](https://huggingface.co/HachiML/Mistral-7B-v0.3-m3-lora)
* [openbmb/Eurus-7b-kto](https://huggingface.co/openbmb/Eurus-7b-kto)
* [kaist-ai/janus-dpo-7b](https://huggingface.co/kaist-ai/janus-dpo-7b)
* [nitky/RP-7b-instruct](https://huggingface.co/nitky/RP-7b-instruct)
* [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b)
* [NTQAI/chatntq-ja-7b-v1.0](https://huggingface.co/NTQAI/chatntq-ja-7b-v1.0)
* [Weyaxi/Einstein-v6-7B](https://huggingface.co/Weyaxi/Einstein-v6-7B)
* [internistai/base-7b-v0.2](https://huggingface.co/internistai/base-7b-v0.2)
* [ZySec-AI/ZySec-7B](https://huggingface.co/ZySec-AI/ZySec-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# prepare VE models
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.1-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: prometheus-eval/prometheus-7b-v2.0
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: prometheus-7b-v2.0-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: ZhangShenao/SELM-Zephyr-7B-iter-3
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: SELM-Zephyr-7B-iter-3-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: HachiML/Mistral-7B-v0.3-m3-lora
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Mistral-7B-v0.3-m3-lora-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: openbmb/Eurus-7b-kto
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Eurus-7b-kto-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: kaist-ai/janus-dpo-7b
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: janus-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: nitky/RP-7b-instruct
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: RP-7b-instruct-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: stabilityai/japanese-stablelm-base-gamma-7b
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: NTQAI/chatntq-ja-7b-v1.0
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: chatntq-ja-7b-v1.0-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: Weyaxi/Einstein-v6-7B
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: Einstein-v6-7B-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: internistai/base-7b-v0.2
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: internistai-base-7b-v0.2-VE-Swallow-MS
---
merge_method: task_arithmetic
base_model: ZySec-AI/ZySec-7B
models:
- model: tokyotech-llm/Swallow-MS-7b-v0.1
parameters:
weight:
- filter: embed_tokens
value: 1.0
- value: 0
dtype: bfloat16
tokenizer_source: model:tokyotech-llm/Swallow-MS-7b-v0.1
name: ZySec-7B-VE-Swallow-MS
---
# create instruction tuning presets
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: prometheus-7b-v2.0-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-prometheus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: dolphin-2.8-mistral-7b-v02-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-dolphin
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: janus-7b-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-janus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: SELM-Zephyr-7B-iter-3-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-selm
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: RP-7b-instruct-VE-Swallow-MS
parameters:
weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
parameters:
weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-rp
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: Eurus-7b-kto-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-eurus
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: chatntq-ja-7b-v1.0-VE-Swallow-MS
parameters:
weight: 0.8
- model: japanese-stablelm-base-gamma-7b-VE-Swallow-MS
parameters:
weight: -0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-chatntq-ja
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: Einstein-v6-7B-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-es
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: internistai-base-7b-v0.2-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-internistai
---
merge_method: task_arithmetic
base_model: Mistral-7B-v0.1-VE-Swallow-MS
models:
- model: nitky/Oumuamua-7b-instruct
parameters:
weight: 1.0
- model: ZySec-7B-VE-Swallow-MS
parameters:
weight: 0.8
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-preset-zysec
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-dolphin
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-alpha
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-janus
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-beta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-m3
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-gamma
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-selm
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-delta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-rp
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-epsilon
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-eurus
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-zeta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-chatntq-ja
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-eta
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-es
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-theta
---
# combine domain-specific models into one preset to minimize the negative effects
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: nitky/Oumuamua-7b-instruct
- model: Oumuamua-7b-instruct-v2-preset-internistai
- model: Oumuamua-7b-instruct-v2-preset-zysec
- model: Oumuamua-7b-instruct-v2-preset-prometheus
dtype: bfloat16
name: Oumuamua-7b-instruct-v2-it-omega
---
merge_method: model_stock
base_model: nitky/Oumuamua-7b-base
models:
- model: Oumuamua-7b-instruct-v2-it-alpha
- model: Oumuamua-7b-instruct-v2-it-beta
- model: Oumuamua-7b-instruct-v2-it-gamma
- model: Oumuamua-7b-instruct-v2-it-delta
- model: Oumuamua-7b-instruct-v2-it-epsilon
- model: Oumuamua-7b-instruct-v2-it-zeta
- model: Oumuamua-7b-instruct-v2-it-eta
- model: Oumuamua-7b-instruct-v2-it-theta
- model: Oumuamua-7b-instruct-v2-it-omega
dtype: bfloat16
name: Oumuamua-7b-instruct-v2
```
| myrulezzzz/website_FAQ_mistral_Q4_GGUF_V6 | myrulezzzz | 2024-06-19T22:19:41Z | 22 | 0 | transformers | ["transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "base_model:quantized:unsloth/mistral-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2024-06-19T22:09:01Z |
---
base_model: unsloth/mistral-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** myrulezzzz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
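Since this repository ships GGUF weights, it can also be run with llama.cpp-compatible runtimes. Here is a minimal sketch with `llama-cpp-python`; the GGUF filename pattern below is an assumption, so check the repository file listing for the actual name.

```python
# Hedged sketch using llama-cpp-python; the filename glob is hypothetical.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="myrulezzzz/website_FAQ_mistral_Q4_GGUF_V6",
    filename="*Q4*.gguf",  # hypothetical pattern, verify against the repo
    n_ctx=4096,
)
out = llm("Question: What are your opening hours?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```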
| varun-v-rao/bart-large-lora-2.36M-snli-model1 | varun-v-rao | 2024-06-19T22:16:18Z | 105 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bart", "text-classification", "generated_from_trainer", "dataset:stanfordnlp/snli", "base_model:facebook/bart-large", "base_model:finetune:facebook/bart-large", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-06-19T19:12:24Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
datasets:
- stanfordnlp/snli
metrics:
- accuracy
model-index:
- name: bart-large-lora-2.36M-snli-model1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: snli
type: stanfordnlp/snli
metrics:
- name: Accuracy
type: accuracy
value: 0.9081487502540134
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-lora-2.36M-snli-model1
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the snli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2606
- Accuracy: 0.9081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 94
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3808 | 1.0 | 4292 | 0.2822 | 0.9012 |
| 0.3504 | 2.0 | 8584 | 0.2628 | 0.9055 |
| 0.3426 | 3.0 | 12876 | 0.2606 | 0.9081 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
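For quick inference, a minimal sketch (an assumption rather than an official snippet; the label names come from the checkpoint's `config.json` `id2label` mapping, which this card does not document):

```python
# Hedged sketch: premise/hypothesis classification with this SNLI fine-tune.
# Label names are read from the checkpoint config rather than hard-coded.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "varun-v-rao/bart-large-lora-2.36M-snli-model1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```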
| lodestones/Florence-2-large-ft | lodestones | 2024-06-19T22:16:08Z | 112 | 0 | transformers | ["transformers", "pytorch", "florence2", "text-generation", "vision", "image-to-text", "custom_code", "arxiv:2311.06242", "license:mit", "autotrain_compatible", "region:us"] | image-to-text | 2024-06-19T22:16:08Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-large-ft/resolve/main/LICENSE
pipeline_tag: image-to-text
tags:
- vision
---
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
## Model Summary
This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large model](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B |
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B |
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks |
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks |
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
do_sample=False,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))
print(parsed_answer)
```
## Tasks
This model is capable of performing different tasks by changing the prompts.
First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
def run_example(task_prompt, text_input=None):
if text_input is None:
prompt = task_prompt
else:
prompt = task_prompt + text_input
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
print(parsed_answer)
```
</details>
Here are the tasks `Florence-2` can perform:
<details>
<summary> Click to expand </summary>
### Caption
```python
prompt = "<CAPTION>"
run_example(prompt)
```
### Detailed Caption
```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```
### More Detailed Caption
```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```
### Caption to Phrase Grounding
The caption-to-phrase-grounding task requires an additional text input, i.e., the caption.
Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```
### Object Detection
OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<OD>"
run_example(prompt)
```
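To visualize detections, a small helper is handy. This is a sketch assuming the `<OD>` output structure documented above; the helper name `draw_od` is illustrative, not part of the model's API.

```python
# Hedged sketch: draw <OD> boxes and labels onto the image with PIL.
from PIL import ImageDraw

def draw_od(image, parsed_answer):
    out = image.copy()
    draw = ImageDraw.Draw(out)
    result = parsed_answer["<OD>"]
    for (x1, y1, x2, y2), label in zip(result["bboxes"], result["labels"]):
        draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
        draw.text((x1, y1), label, fill="red")
    return out
```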
### Dense Region Caption
Dense region caption results format:
{'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```
### Region proposal
Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['', '', ...]}}
```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```
### OCR
```python
prompt = "<OCR>"
run_example(prompt)
```
### OCR with Region
OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```
For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb).
</details>
# Benchmarks
## Florence-2 Zero-shot performance
The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.
| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |
The following table continues the comparison with performance on other vision-language evaluation tasks.
| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |
## Florence-2 finetuned performance
We finetune Florence-2 models on a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.
The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.
| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |
| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |
## BibTex and citation info
```
@article{xiao2023florence,
title={Florence-2: Advancing a unified representation for a variety of vision tasks},
author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
journal={arXiv preprint arXiv:2311.06242},
year={2023}
}
```
| lodestones/Florence-2-base-ft | lodestones | 2024-06-19T22:15:38Z | 110 | 0 | transformers | ["transformers", "pytorch", "florence2", "text-generation", "vision", "image-to-text", "custom_code", "arxiv:2311.06242", "license:mit", "autotrain_compatible", "region:us"] | image-to-text | 2024-06-19T22:15:38Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-base-ft/resolve/main/LICENSE
pipeline_tag: image-to-text
tags:
- vision
---
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
## Model Summary
This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large model](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B |
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B |
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks |
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks |
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True)
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
do_sample=False,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))
print(parsed_answer)
```
## Tasks
This model is capable of performing different tasks by changing the prompts.
First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base-ft", trust_remote_code=True)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
def run_example(task_prompt, text_input=None):
if text_input is None:
prompt = task_prompt
else:
prompt = task_prompt + text_input
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
print(parsed_answer)
```
</details>
Here are the tasks `Florence-2` can perform:
<details>
<summary> Click to expand </summary>
### Caption
```python
prompt = "<CAPTION>"
run_example(prompt)
```
### Detailed Caption
```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```
### More Detailed Caption
```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```
### Caption to Phrase Grounding
The caption-to-phrase-grounding task requires an additional text input, i.e., the caption.
Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```
### Object Detection
OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<OD>"
run_example(prompt)
```
### Dense Region Caption
Dense region caption results format:
{'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```
### Region proposal
Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['', '', ...]}}
```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```
### OCR
```python
prompt = "<OCR>"
run_example(prompt)
```
### OCR with Region
OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```
For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb).
</details>
# Benchmarks
## Florence-2 Zero-shot performance
The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.
| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |
The following table continues the comparison with performance on other vision-language evaluation tasks.
| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |
## Florence-2 finetuned performance
We finetune Florence-2 models on a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.
The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.
| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |
| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |
## BibTex and citation info
```
@article{xiao2023florence,
title={Florence-2: Advancing a unified representation for a variety of vision tasks},
author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
journal={arXiv preprint arXiv:2311.06242},
year={2023}
}
```
| lodestone-horizon/Florence-2-base | lodestone-horizon | 2024-06-19T22:15:17Z | 112 | 0 | transformers | ["transformers", "pytorch", "florence2", "text-generation", "vision", "image-to-text", "custom_code", "arxiv:2311.06242", "license:mit", "autotrain_compatible", "region:us"] | image-to-text | 2024-06-19T22:15:17Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-base/resolve/main/LICENSE
pipeline_tag: image-to-text
tags:
- vision
---
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
## Model Summary
This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large model](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True)
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
do_sample=False,
num_beams=3,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))
print(parsed_answer)
```
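For a quick visual check, the parsed detections can be drawn back onto the image. The sketch below continues from the snippet above and assumes `parsed_answer` follows the `<OD>` result format documented in the Tasks section (pixel-coordinate `bboxes` with parallel `labels`); it is an illustration, not part of the official API.
```python
from PIL import ImageDraw

# Draw each detected box and its label onto the PIL image from above.
detections = parsed_answer["<OD>"]
draw = ImageDraw.Draw(image)
for bbox, label in zip(detections["bboxes"], detections["labels"]):
    x1, y1, x2, y2 = bbox
    draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
    draw.text((x1, y1), label, fill="red")
image.save("detections.png")
```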
## Tasks
This model can perform different tasks by changing the prompt.
First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
def run_example(task_prompt, text_input=None):
    # Task prompts can optionally be followed by free-form text (e.g. a caption).
    if text_input is None:
        prompt = task_prompt
    else:
        prompt = task_prompt + text_input
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3,
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
    print(parsed_answer)
    return parsed_answer
```
</details>
Here are the tasks `Florence-2` can perform:
<details>
<summary> Click to expand </summary>
### Caption
```python
prompt = "<CAPTION>"
run_example(prompt)
```
### Detailed Caption
```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```
### More Detailed Caption
```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```
### Caption to Phrase Grounding
The caption to phrase grounding task requires additional text input, i.e., the caption.
Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```
### Object Detection
OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<OD>"
run_example(prompt)
```
### Dense Region Caption
Dense region caption results format:
{'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```
### Region proposal
Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['', '', ...]}}
```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```
### OCR
```python
prompt = "<OCR>"
run_example(prompt)
```
### OCR with Region
OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```
For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb).
</details>
# Benchmarks
## Florence-2 Zero-shot performance
The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.
| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |
The following table continues the comparison with performance on other vision-language evaluation tasks.
| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |
## Florence-2 finetuned performance
We finetune Florence-2 models with a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.
The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.
| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |
| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |
## BibTex and citation info
```
@article{xiao2023florence,
title={Florence-2: Advancing a unified representation for a variety of vision tasks},
author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
journal={arXiv preprint arXiv:2311.06242},
year={2023}
}
```
|
lodestone-horizon/Florence-2-large
|
lodestone-horizon
| 2024-06-19T22:14:57Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"florence2",
"text-generation",
"vision",
"image-to-text",
"custom_code",
"arxiv:2311.06242",
"license:mit",
"autotrain_compatible",
"region:us"
] |
image-to-text
| 2024-06-19T22:14:56Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-large/resolve/main/LICENSE
pipeline_tag: image-to-text
tags:
- vision
---
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
## Model Summary
This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
num_beams=3,
do_sample=False
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))
print(parsed_answer)
```
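On a CUDA machine the same snippet can run in half precision. The sketch below is a minimal variant of the code above, reusing the `image` object; `torch_dtype` and the `.to()` moves are standard `transformers`/PyTorch usage, and the fallback to CPU/fp32 is an assumption for machines without a GPU.
```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-large", torch_dtype=dtype, trust_remote_code=True
).to(device)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)

inputs = processor(text="<OD>", images=image, return_tensors="pt")
# Tensors must live on the model's device; pixel_values must also match its dtype.
generated_ids = model.generate(
    input_ids=inputs["input_ids"].to(device),
    pixel_values=inputs["pixel_values"].to(device, dtype),
    max_new_tokens=1024,
    num_beams=3,
)
```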
## Tasks
This model can perform different tasks by changing the prompt.
First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
def run_example(task_prompt, text_input=None):
    # Task prompts can optionally be followed by free-form text (e.g. a caption).
    if text_input is None:
        prompt = task_prompt
    else:
        prompt = task_prompt + text_input
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3,
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
    print(parsed_answer)
    return parsed_answer
```
</details>
Here are the tasks `Florence-2` can perform:
<details>
<summary> Click to expand </summary>
### Caption
```python
prompt = "<CAPTION>"
run_example(prompt)
```
### Detailed Caption
```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```
### More Detailed Caption
```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```
### Caption to Phrase Grounding
The caption to phrase grounding task requires additional text input, i.e., the caption.
Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```
### Object Detection
OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<OD>"
run_example(prompt)
```
### Dense Region Caption
Dense region caption results format:
{'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```
### Region proposal
Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['', '', ...]}}
```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```
### OCR
```python
prompt = "<OCR>"
run_example(prompt)
```
### OCR with Region
OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```
For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb).
</details>
# Benchmarks
## Florence-2 Zero-shot performance
The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.
| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |
The following table continues the comparison with performance on other vision-language evaluation tasks.
| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |
## Florence-2 finetuned performance
We finetune Florence-2 models with a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.
The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.
| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |
| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |
## BibTex and citation info
```
@article{xiao2023florence,
title={Florence-2: Advancing a unified representation for a variety of vision tasks},
author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
journal={arXiv preprint arXiv:2311.06242},
year={2023}
}
```
|
lodestones/Florence-2-large
|
lodestones
| 2024-06-19T22:14:49Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"florence2",
"text-generation",
"vision",
"image-to-text",
"custom_code",
"arxiv:2311.06242",
"license:mit",
"autotrain_compatible",
"region:us"
] |
image-to-text
| 2024-06-19T22:14:49Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-large/resolve/main/LICENSE
pipeline_tag: image-to-text
tags:
- vision
---
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
## Model Summary
This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
num_beams=3,
do_sample=False
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))
print(parsed_answer)
```
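Because the task is selected purely by the prompt token, comparing tasks is just a loop over prompts. A small sketch reusing `model`, `processor`, and `image` from the snippet above, with the three caption prompts documented in the Tasks section below:
```python
# Compare the three caption granularities on the same image.
for task in ["<CAPTION>", "<DETAILED_CAPTION>", "<MORE_DETAILED_CAPTION>"]:
    inputs = processor(text=task, images=image, return_tensors="pt")
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3,
    )
    text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed = processor.post_process_generation(text, task=task, image_size=(image.width, image.height))
    print(task, parsed)
```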
## Tasks
This model can perform different tasks by changing the prompt.
First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
def run_example(task_prompt, text_input=None):
    # Task prompts can optionally be followed by free-form text (e.g. a caption).
    if text_input is None:
        prompt = task_prompt
    else:
        prompt = task_prompt + text_input
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3,
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
    print(parsed_answer)
    return parsed_answer
```
</details>
Here are the tasks `Florence-2` can perform:
<details>
<summary> Click to expand </summary>
### Caption
```python
prompt = "<CAPTION>"
run_example(prompt)
```
### Detailed Caption
```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```
### More Detailed Caption
```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```
### Caption to Phrase Grounding
The caption to phrase grounding task requires additional text input, i.e., the caption.
Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```
### Object Detection
OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<OD>"
run_example(prompt)
```
### Dense Region Caption
Dense region caption results format:
{'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```
### Region proposal
Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['', '', ...]}}
```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```
### OCR
```python
prompt = "<OCR>"
run_example(prompt)
```
### OCR with Region
OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```
For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb).
</details>
# Benchmarks
## Florence-2 Zero-shot performance
The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.
| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |
The following table continues the comparison with performance on other vision-language evaluation tasks.
| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |
## Florence-2 finetuned performance
We finetune Florence-2 models with a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.
The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.
| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |
| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |
## BibTex and citation info
```
@article{xiao2023florence,
title={Florence-2: Advancing a unified representation for a variety of vision tasks},
author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
journal={arXiv preprint arXiv:2311.06242},
year={2023}
}
```
|
lucianoacsilva/time6nlp-bertimbaularge
|
lucianoacsilva
| 2024-06-19T22:14:22Z | 35 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T22:04:21Z |
---
license: apache-2.0
---
|
ViHealthcare/ViMedQuyen500M_abstractiveQA_RAG
|
ViHealthcare
| 2024-06-19T22:10:24Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-15T03:55:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
varun-v-rao/bart-base-snli-model3
|
varun-v-rao
| 2024-06-19T22:09:04Z | 104 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text-classification",
"generated_from_trainer",
"dataset:stanfordnlp/snli",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-19T21:27:46Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
datasets:
- stanfordnlp/snli
metrics:
- accuracy
model-index:
- name: bart-base-snli-model3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: snli
type: stanfordnlp/snli
metrics:
- name: Accuracy
type: accuracy
value: 0.9096728307254623
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-snli-model3
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the snli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2606
- Accuracy: 0.9097
## Model description
More information needed
## Intended uses & limitations
More information needed
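A minimal inference sketch for this checkpoint. SNLI inputs are (premise, hypothesis) pairs; the printed label depends on the `id2label` mapping stored in the checkpoint's config, which may use generic `LABEL_n` names, so treat the example sentences and their interpretation as illustrative assumptions.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "varun-v-rao/bart-base-snli-model3"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the pair as (premise, hypothesis), as in standard NLI fine-tuning.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```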
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 128
- seed: 70
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
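A hypothetical reconstruction of these settings as `TrainingArguments`, assuming a single device (the logged `train_batch_size` of 256 would otherwise be split across devices); the actual training script is not included in this repository.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="bart-base-snli-model3",
    learning_rate=2e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=128,
    seed=70,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```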
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3738 | 1.0 | 2146 | 0.2868 | 0.8973 |
| 0.3227 | 2.0 | 4292 | 0.2618 | 0.9065 |
| 0.294 | 3.0 | 6438 | 0.2606 | 0.9097 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
itisarainyday/llemma-2-7b-ft-merged
|
itisarainyday
| 2024-06-19T22:06:05Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T21:59:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kanishka/smolm-autoreg-bpe-counterfactual_babylm_anans_new-seed_211-1e-4
|
kanishka
| 2024-06-19T22:00:34Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual_babylm_anans_new",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-18T23:21:17Z |
---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual_babylm_anans_new
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual_babylm_anans_new-seed_211-1e-4
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual_babylm_anans_new
type: kanishka/counterfactual_babylm_anans_new
metrics:
- name: Accuracy
type: accuracy
value: 0.4062615946031953
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual_babylm_anans_new-seed_211-1e-4
This model was trained from scratch on the kanishka/counterfactual_babylm_anans_new dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4227
- Accuracy: 0.4063
## Model description
More information needed
## Intended uses & limitations
More information needed
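A minimal generation sketch, assuming the tokenizer and weights in this repository load with the standard Auto classes (the model is a small OPT-style autoregressive LM); the prompt is an arbitrary example.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "kanishka/smolm-autoreg-bpe-counterfactual_babylm_anans_new-seed_211-1e-4"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("The child found a", return_tensors="pt")
with torch.no_grad():
    # Greedy decoding; sample instead by setting do_sample=True.
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```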
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
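A hypothetical `TrainingArguments` reconstruction of the settings above, assuming a single device and mapping "Native AMP" to `fp16=True`; the actual training script is not part of this repository.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="smolm-autoreg-bpe-counterfactual_babylm_anans_new-seed_211-1e-4",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=211,
    lr_scheduler_type="linear",
    warmup_steps=32000,
    num_train_epochs=20.0,
    fp16=True,
)
```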
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 4.0574 | 1.0 | 18595 | 4.2623 | 0.3095 |
| 3.5744 | 2.0 | 37190 | 3.7410 | 0.3630 |
| 3.3998 | 3.0 | 55785 | 3.5874 | 0.3791 |
| 3.2911 | 4.0 | 74380 | 3.5155 | 0.3873 |
| 3.2246 | 5.0 | 92975 | 3.4782 | 0.3919 |
| 3.1723 | 6.0 | 111570 | 3.4440 | 0.3962 |
| 3.1287 | 7.0 | 130165 | 3.4271 | 0.3987 |
| 3.0994 | 8.0 | 148760 | 3.3990 | 0.4007 |
| 3.0668 | 9.0 | 167355 | 3.4112 | 0.4018 |
| 3.0398 | 10.0 | 185950 | 3.3915 | 0.4033 |
| 3.0097 | 11.0 | 204545 | 3.4067 | 0.4037 |
| 2.9924 | 12.0 | 223140 | 3.4117 | 0.4039 |
| 2.9702 | 13.0 | 241735 | 3.3926 | 0.4054 |
| 2.9486 | 14.0 | 260330 | 3.4035 | 0.4053 |
| 2.9284 | 15.0 | 278925 | 3.4107 | 0.4056 |
| 2.9143 | 16.0 | 297520 | 3.4057 | 0.4061 |
| 2.8931 | 17.0 | 316115 | 3.4160 | 0.4058 |
| 2.8785 | 18.0 | 334710 | 3.4139 | 0.4063 |
| 2.8611 | 19.0 | 353305 | 3.4191 | 0.4062 |
| 2.8443 | 20.0 | 371900 | 3.4227 | 0.4063 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.3.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
theIndividual/Florence-2-large
|
theIndividual
| 2024-06-19T21:59:09Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"florence2",
"text-generation",
"vision",
"image-to-text",
"custom_code",
"arxiv:2311.06242",
"license:mit",
"autotrain_compatible",
"region:us"
] |
image-to-text
| 2024-06-19T21:38:47Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-large/resolve/main/LICENSE
pipeline_tag: image-to-text
tags:
- vision
---
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
## Model Summary
This Hub repository contains a Hugging Face `transformers` implementation of the Florence-2 model from Microsoft.
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
num_beams=3,
do_sample=False
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))
print(parsed_answer)
```
## Tasks
This model can perform different tasks by changing the prompt.
First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
def run_example(task_prompt, text_input=None):
    # Task prompts can optionally be followed by free-form text (e.g. a caption).
    if text_input is None:
        prompt = task_prompt
    else:
        prompt = task_prompt + text_input
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
        num_beams=3,
    )
    generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
    parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
    print(parsed_answer)
    return parsed_answer
```
</details>
Here are the tasks `Florence-2` can perform:
<details>
<summary> Click to expand </summary>
### Caption
```python
prompt = "<CAPTION>"
run_example(prompt)
```
### Detailed Caption
```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```
### More Detailed Caption
```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```
### Caption to Phrase Grounding
The caption to phrase grounding task requires additional text input, i.e., the caption.
Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```
### Object Detection
OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<OD>"
run_example(prompt)
```
### Dense Region Caption
Dense region caption results format:
{'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```
### Region proposal
Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['', '', ...]}}
```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```
### OCR
```python
prompt = "<OCR>"
run_example(prompt)
```
### OCR with Region
OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```
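Quad boxes are four corner points rather than axis-aligned rectangles, so they draw naturally as polygons. A minimal sketch, assuming `run_example` returns the parsed answer (as in the version above) and reusing the PIL `image`:
```python
from PIL import ImageDraw

parsed_answer = run_example("<OCR_WITH_REGION>")
ocr = parsed_answer["<OCR_WITH_REGION>"]
draw = ImageDraw.Draw(image)
for quad, text in zip(ocr["quad_boxes"], ocr["labels"]):
    # Each quad box is [x1, y1, x2, y2, x3, y3, x4, y4]: four corners in order.
    points = list(zip(quad[0::2], quad[1::2]))
    draw.polygon(points, outline="lime")
    draw.text(points[0], text, fill="lime")
image.save("ocr_regions.png")
```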
For more detailed examples, please refer to the [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb).
</details>
# Benchmarks
## Florence-2 Zero-shot performance
The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.
| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |
The following table continues the comparison with performance on other vision-language evaluation tasks.
| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |
## Florence-2 finetuned performance
We finetune Florence-2 models with a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.
The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.
| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |
| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |
## BibTex and citation info
```
@article{xiao2023florence,
title={Florence-2: Advancing a unified representation for a variety of vision tasks},
author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
journal={arXiv preprint arXiv:2311.06242},
year={2023}
}
```
|
varun-v-rao/bart-base-bn-adapter-895K-snli-model2
|
varun-v-rao
| 2024-06-19T21:57:17Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"dataset:stanfordnlp/snli",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-06-19T20:58:20Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
datasets:
- stanfordnlp/snli
metrics:
- accuracy
model-index:
- name: bart-base-bn-adapter-895K-snli-model2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: snli
type: stanfordnlp/snli
metrics:
- name: Accuracy
type: accuracy
value: 0.8572444625076204
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-bn-adapter-895K-snli-model2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the snli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3775
- Accuracy: 0.8572
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 54
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5209 | 1.0 | 8584 | 0.4159 | 0.8383 |
| 0.4809 | 2.0 | 17168 | 0.3858 | 0.8534 |
| 0.473 | 3.0 | 25752 | 0.3775 | 0.8572 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
universalner/uner_slo_snk
|
universalner
| 2024-06-19T21:52:13Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:uner_slo_snk",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-19T21:50:15Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- uner_slo_snk
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner_slo_snk
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: uner_slo_snk
type: uner_slo_snk
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.808972503617945
- name: Recall
type: recall
value: 0.8789308176100629
- name: F1
type: f1
value: 0.8425018839487566
- name: Accuracy
type: accuracy
value: 0.9833503494855886
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner_slo_snk
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the uner_slo_snk dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0903
- Precision: 0.8090
- Recall: 0.8789
- F1: 0.8425
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
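Since this checkpoint is a standard xlm-roberta-large token-classification fine-tune, it should load with the usual transformers pipeline. A minimal sketch (the Slovak example sentence is an assumption, not from the original card):

```python
from transformers import pipeline

# Aggregate sub-word tokens into whole entity spans.
ner = pipeline(
    "token-classification",
    model="universalner/uner_slo_snk",
    aggregation_strategy="simple",
)

print(ner("Janko Novák pracuje v Bratislave."))
```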
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.10.1+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
universalner/uner_ser_set
|
universalner
| 2024-06-19T21:49:56Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:uner_ser_set",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-19T21:48:59Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- uner_ser_set
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner_ser_set
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: uner_ser_set
type: uner_ser_set
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.9338624338624338
- name: Recall
type: recall
value: 0.9489247311827957
- name: F1
type: f1
value: 0.9413333333333335
- name: Accuracy
type: accuracy
value: 0.9930792962561494
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner_ser_set
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the uner_ser_set dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0440
- Precision: 0.9339
- Recall: 0.9489
- F1: 0.9413
- Accuracy: 0.9931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.10.1+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
universalner/uner_qaf_ara
|
universalner
| 2024-06-19T21:48:36Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:uner_qaf_ara",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-19T21:44:08Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- uner_qaf_ara
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner_qaf_ara
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: uner_qaf_ara
type: uner_qaf_ara
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.7241379310344828
- name: Recall
type: recall
value: 0.7205882352941176
- name: F1
type: f1
value: 0.7223587223587223
- name: Accuracy
type: accuracy
value: 0.9547511312217195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner_qaf_ara
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the uner_qaf_ara dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1885
- Precision: 0.7241
- Recall: 0.7206
- F1: 0.7224
- Accuracy: 0.9548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.10.1+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Sorour/finqa-ft
|
Sorour
| 2024-06-19T21:47:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T21:47:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vkurniavka/llama3-cover
|
vkurniavka
| 2024-06-19T21:46:25Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-19T20:35:11Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
universalner/uner_por_bos
|
universalner
| 2024-06-19T21:43:28Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:uner_por_bos",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-19T21:39:00Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- uner_por_bos
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner_por_bos
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: uner_por_bos
type: uner_por_bos
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.8933797909407666
- name: Recall
type: recall
value: 0.9150606709493219
- name: F1
type: f1
value: 0.9040902679830748
- name: Accuracy
type: accuracy
value: 0.988434632825957
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner_por_bos
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the uner_por_bos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0645
- Precision: 0.8934
- Recall: 0.9151
- F1: 0.9041
- Accuracy: 0.9884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.10.1+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
Skhaled99/Qwen2-7B-finetuned
|
Skhaled99
| 2024-06-19T21:39:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T20:20:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
universalner/uner_eng_ewt
|
universalner
| 2024-06-19T21:38:20Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:uner_eng_ewt",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-19T21:36:21Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- uner_eng_ewt
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner_eng_ewt
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: uner_eng_ewt
type: uner_eng_ewt
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.8282828282828283
- name: Recall
type: recall
value: 0.8506224066390041
- name: F1
type: f1
value: 0.8393039918116683
- name: Accuracy
type: accuracy
value: 0.9865203387808661
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner_eng_ewt
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the uner_eng_ewt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0734
- Precision: 0.8283
- Recall: 0.8506
- F1: 0.8393
- Accuracy: 0.9865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.10.1+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
universalner/uner_dan_ddt
|
universalner
| 2024-06-19T21:35:55Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:uner_dan_ddt",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-19T21:33:57Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- uner_dan_ddt
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner_dan_ddt
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: uner_dan_ddt
type: uner_dan_ddt
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.9018567639257294
- name: Recall
type: recall
value: 0.8970976253298153
- name: F1
type: f1
value: 0.8994708994708994
- name: Accuracy
type: accuracy
value: 0.9939024390243902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner_dan_ddt
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the uner_dan_ddt dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0333
- Precision: 0.9019
- Recall: 0.8971
- F1: 0.8995
- Accuracy: 0.9939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.10.1+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
juliamendelsohn/social-movement-core-framing-tasks
|
juliamendelsohn
| 2024-06-19T21:35:26Z | 34 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T21:30:15Z |
RoBERTa model for detecting social movement core framing tasks in tweets.
Multilabel classification of 3 categories:
- diagnostic (problem identification and blame)
- prognostic (solutions, tactics, solidarity, and counterframing)
- motivational (calls to action)
The model is trained on Twitter data from three issue areas: guns, LGBTQ rights, and immigration.
All data is in English and written in 2018-2019.
Paper: https://journalqd.org/article/view/5896
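As a usage sketch (not from the original card; the label order below is an assumption inferred from the three categories listed above, so verify it against the model's `config.id2label`):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "juliamendelsohn/social-movement-core-framing-tasks"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("We must act now to end gun violence!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel classification: apply a sigmoid per category, not a softmax.
probs = torch.sigmoid(logits)[0]
for label, p in zip(["diagnostic", "prognostic", "motivational"], probs.tolist()):
    print(f"{label}: {p:.2f}")
```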
|
universalner/uner_cro_set
|
universalner
| 2024-06-19T21:30:31Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:uner_cro_set",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-19T21:20:00Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- uner_cro_set
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner_cro_set
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: uner_cro_set
type: uner_cro_set
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.9337152209492635
- name: Recall
type: recall
value: 0.9360131255127153
- name: F1
type: f1
value: 0.9348627611634575
- name: Accuracy
type: accuracy
value: 0.9921047909563969
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner_cro_set
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the uner_cro_set dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0482
- Precision: 0.9337
- Recall: 0.9360
- F1: 0.9349
- Accuracy: 0.9921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.10.1+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AnasElshaer00/lora_model
|
AnasElshaer00
| 2024-06-19T21:28:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T21:27:48Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AnasElshaer00
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ZeyadGamal/q-FrozenLake-v1-4x4-noSlippery
|
ZeyadGamal
| 2024-06-19T21:26:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-19T21:26:21Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Deep RL course notebook.
model = load_from_hub(repo_id="ZeyadGamal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Skullly/Testing_purposes
|
Skullly
| 2024-06-19T21:24:24Z | 200 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-384",
"base_model:finetune:google/vit-base-patch16-384",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-19T05:15:34Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-384
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DeepFake-image-detection-ViT-384
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeepFake-image-detection-ViT-384
This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0272
- Accuracy: 0.9911
## Model description
More information needed
## Intended uses & limitations
More information needed
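As a usage sketch (not from the original card; the example file name is a placeholder), the checkpoint is a standard ViT classifier and should work with the image-classification pipeline:

```python
from transformers import pipeline

clf = pipeline("image-classification", model="Skullly/Testing_purposes")
print(clf("photo.jpg"))  # label/score pairs for the fine-tuned classes
```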
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0037 | 0.9984 | 546 | 0.0272 | 0.9911 |
| 0.0006 | 1.9986 | 1093 | 0.1121 | 0.9644 |
| 0.0002 | 2.496 | 1365 | 0.1357 | 0.9582 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
juliamendelsohn/social-movement-relevance
|
juliamendelsohn
| 2024-06-19T21:22:13Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-19T21:19:40Z |
RoBERTa model for detecting a tweet's relevance to social movements, trained on data from three issue areas: guns, LGBTQ rights, and immigration.
Paper: https://journalqd.org/article/view/5896
|
Zac013/examle_bloom_3b
|
Zac013
| 2024-06-19T21:17:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T21:17:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RaiRachit/distilbert-base-uncased-finetuned-disaster
|
RaiRachit
| 2024-06-19T21:17:13Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-19T19:55:45Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert-base-uncased-finetuned-disaster
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-disaster
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3987
- Accuracy: 0.8280
- F1: 0.7963
- Precision: 0.8063
- Recall: 0.7865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.4734 | 1.0 | 96 | 0.3963 | 0.8385 | 0.7987 | 0.8546 | 0.7496 |
| 0.3541 | 2.0 | 192 | 0.3987 | 0.8280 | 0.7963 | 0.8063 | 0.7865 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
grimjim/llama-3-merge-pp-instruct-8B
|
grimjim
| 2024-06-19T21:13:34Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"pytorch",
"mergekit",
"merge",
"conversational",
"en",
"base_model:Nitral-AI/Poppy_Porpoise-0.72-L3-8B",
"base_model:merge:Nitral-AI/Poppy_Porpoise-0.72-L3-8B",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-06T02:57:14Z |
---
language:
- en
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
- ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
library_name: transformers
tags:
- meta
- llama-3
- pytorch
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# llama-3-merge-pp-instruct-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Lightly tested at temperature=1.0, minP=0.02 with provisional Llama 3 Instruct prompts.
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* [ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
layer_range: [0,32]
- model: meta-llama/Meta-Llama-3-8B-Instruct
layer_range: [0,32]
merge_method: slerp
base_model: ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B
parameters:
t:
- value: 0.5
dtype: bfloat16
```
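For reproduction, a configuration like this is typically run through mergekit's command-line entry point, e.g. `mergekit-yaml config.yml ./output-model-directory` (the file and output paths here are placeholders, not from the original card).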
|
elee25/Reinforce-Pixelcopter-PLE-v0
|
elee25
| 2024-06-19T21:12:55Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-17T20:32:00Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.30 +/- 14.89
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
grimjim/llama-3-merge-avalon-8B
|
grimjim
| 2024-06-19T21:12:27Z | 39 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"pytorch",
"mergekit",
"merge",
"conversational",
"en",
"base_model:grimjim/llama-3-merge-virt-req-8B",
"base_model:merge:grimjim/llama-3-merge-virt-req-8B",
"base_model:nbeerbower/llama-3-slerp-kraut-dragon-8B",
"base_model:merge:nbeerbower/llama-3-slerp-kraut-dragon-8B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-10T02:27:01Z |
---
language:
- en
base_model:
- grimjim/llama-3-merge-virt-req-8B
- nbeerbower/llama-3-slerp-kraut-dragon-8B
library_name: transformers
tags:
- meta
- llama-3
- pytorch
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# llama-3-merge-avalon-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Lightly tested at temperature=1.0, minP=0.02 with provisional Llama 3 Instruct prompts.
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [grimjim/llama-3-merge-virt-req-8B](https://huggingface.co/grimjim/llama-3-merge-virt-req-8B)
* [nbeerbower/llama-3-slerp-kraut-dragon-8B](https://huggingface.co/nbeerbower/llama-3-slerp-kraut-dragon-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: grimjim/llama-3-merge-virt-req-8B
layer_range: [0,32]
- model: nbeerbower/llama-3-slerp-kraut-dragon-8B
layer_range: [0,32]
merge_method: slerp
base_model: grimjim/llama-3-merge-virt-req-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
brandonyang/dual-reasoner
|
brandonyang
| 2024-06-19T21:12:17Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-06-17T23:57:17Z |
---
license: apache-2.0
---
|
grimjim/Llama-3-Luminurse-v0.1-OAS-8B
|
grimjim
| 2024-06-19T21:11:27Z | 6 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:merge:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:grimjim/llama-3-aaditya-OpenBioLLM-8B",
"base_model:merge:grimjim/llama-3-aaditya-OpenBioLLM-8B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-01T20:26:18Z |
---
language:
- en
base_model:
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- grimjim/llama-3-aaditya-OpenBioLLM-8B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# Llama-3-Luminurse-v0.1-OAS-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Given that a biomedical model was merged with Lumimaid, the obvious resulting name was Luminurse.
- [Full weights](https://huggingface.co/grimjim/Llama-3-Luminurse-v0.1-OAS-8B)
- [GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Luminurse-v0.1-OAS-8B-GGUF)
- [iMatrix GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Luminurse-v0.1-OAS-8B-i1-GGUF)
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [grimjim/llama-3-aaditya-OpenBioLLM-8B](https://huggingface.co/grimjim/llama-3-aaditya-OpenBioLLM-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
layer_range: [0,32]
- model: grimjim/llama-3-aaditya-OpenBioLLM-8B
layer_range: [0,32]
merge_method: slerp
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
parameters:
t:
- value: 0.1
dtype: bfloat16
```
|
Seikaijyu/RWKV-5-World-3B-NSFW-Role-4096
|
Seikaijyu
| 2024-06-19T21:11:16Z | 0 | 2 | null |
[
"zh",
"license:mit",
"region:us"
] | null | 2024-03-08T06:44:51Z |
---
license: mit
language:
- zh
---
### Model Description
The latest roleplay model, LoRA fine-tuned on top of RWKV 5.2.
Sample output:

Use the following format and parameters in RWKV for better roleplay:


### Additional Notes
This model uses a self-built 100M-scale dataset; the data is entirely in Chinese, with the following composition:
| Data type | Share |
|---|---|
| TRPG tabletop sessions | 5% |
| NSFW vocabulary supplementary knowledge | 1% |
| NSFW fiction writing | 15% |
| Novel continuation | 10% |
| Instruction roleplay (a special instruction set that recasts instruction data as roleplay data) | 3% |
| High-quality roleplay data | 66% |
The model was obtained by LoRA fine-tuning for two epochs in RWKV-Runner with the following parameters:

|
Seikaijyu/RWKV-5.2-3B-NSFW-Role-16k
|
Seikaijyu
| 2024-06-19T21:11:02Z | 0 | 14 | null |
[
"zh",
"license:mit",
"region:us"
] | null | 2024-01-21T06:46:36Z |
---
license: mit
language:
- zh
---
### An NSFW fine-tune based on the RWKV5-v2-3B-16k model. Running it with RWKV Runner is recommended; the recommended parameters are shown below

|
grimjim/Llama-3-Steerpike-v1-OAS-8B
|
grimjim
| 2024-06-19T21:09:32Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"base_model:Hastagaras/Halu-OAS-8B-Llama3",
"base_model:merge:Hastagaras/Halu-OAS-8B-Llama3",
"base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:merge:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:elyn-dev/Llama-3-Soliloquy-8B-v2",
"base_model:merge:elyn-dev/Llama-3-Soliloquy-8B-v2",
"base_model:grimjim/llama-3-aaditya-OpenBioLLM-8B",
"base_model:merge:grimjim/llama-3-aaditya-OpenBioLLM-8B",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-10T02:36:12Z |
---
base_model:
- Hastagaras/Halu-OAS-8B-Llama3
- openlynn/Llama-3-Soliloquy-8B-v2
- grimjim/llama-3-aaditya-OpenBioLLM-8B
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- mlabonne/NeuralDaredevil-8B-abliterated
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
# Llama-3-Steerpike-v1-OAS-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model might result in characters who are "too" smart if conversation veers into the analytical, but that may be fine depending on the context.
There are issues early on with the consistency of formatting, though that will stabilize with more context.
This model is imperfect, but interesting.
Tested lightly with Instruct prompts, minP=0.01, and temperature 1+.
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [Hastagaras/Halu-OAS-8B-Llama3](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3)
* [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
* [grimjim/llama-3-aaditya-OpenBioLLM-8B](https://huggingface.co/grimjim/llama-3-aaditya-OpenBioLLM-8B)
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: mlabonne/NeuralDaredevil-8B-abliterated
- layer_range: [0, 32]
model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
parameters:
weight: 0.5
- layer_range: [0, 32]
model: Hastagaras/Halu-OAS-8B-Llama3
parameters:
weight: 0.2
- layer_range: [0, 32]
model: openlynn/Llama-3-Soliloquy-8B-v2
parameters:
weight: 0.03
- layer_range: [0, 32]
model: grimjim/llama-3-aaditya-OpenBioLLM-8B
parameters:
weight: 0.1
```
|
64-more/AG-Net
|
64-more
| 2024-06-19T21:08:30Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"Image Classification",
"Caltech-256",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T18:29:26Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- Image Classification
- Caltech-256
---
# AG-Net
This is an implementation (the first public implementation, as far as I know) of AG-Net as described in the paper "Attend and Guide (AG-Net): A Keypoints-driven Attention-based Deep Network for Image Recognition" by Asish Bera, Zachary Wharton, Yonghuai Liu, Nik Bessis, and Ardhendu Behera. I include the weights for the model, which achieves 98.3% accuracy on the test data of the [Caltech-256 dataset](https://data.caltech.edu/records/nyy15-4j048).
I include the IPython notebook in my GitHub, where I derived the model, to "show my work", along with a few Python files that contain the combined code from the notebook:
- `utils.py`: data fetching and augmentations
- `models.py`: the torch modules
- `train.py`: trains the model and optionally saves it
- `test.py`: tests the model
Note that I coded all of this on my own, with the exception of the "Intra Self-Attention" module, which I obtained from https://github.com/heykeetae/Self-Attention-GAN/blob/master/sagan_models.py (note that it is called Self-Attention there, implemented as the `Self_Attn` module).
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: PyTorch
- Docs: [github.com/DanielKovach](https://github.com/DanielKovach)
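Assuming the model class in `models.py` is named `AGNet` (an assumption on my part; check the repo for the actual name), the mixin makes loading a one-liner:
```python
# Minimal sketch; run from a clone of the GitHub repo so models.py is importable.
from models import AGNet  # hypothetical class name

model = AGNet.from_pretrained("64-more/AG-Net")  # from_pretrained is provided by PyTorchModelHubMixin
model.eval()
```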
|
TheCluster/YOLOv9-CoreML
|
TheCluster
| 2024-06-19T21:07:50Z | 1 | 5 | null |
[
"coreml",
"yolo",
"yolov9",
"apple",
"ane",
"tracking",
"object-detection",
"license:agpl-3.0",
"region:us"
] |
object-detection
| 2024-06-19T20:57:01Z |
---
license: agpl-3.0
pipeline_tag: object-detection
tags:
- yolo
- yolov9
- apple
- coreml
- ane
- tracking
---
### YOLOv9 CoreML
YOLOv9 models converted to CoreML format.
Using:
- pytorch 2.2.0
- coremltools 7.2
- ultralytics 8.2.35
Tested on:
- M1
- M1 Ultra
- M4 (iPad Pro 2024)
Models can be run on the Apple Neural Engine.
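As a sketch (the file name below is an assumption; substitute the actual `.mlpackage`/`.mlmodel` from this repo), a converted model can be pinned to the CPU and Neural Engine from Python with coremltools:
```python
import coremltools as ct

# Restrict execution to CPU + Apple Neural Engine
model = ct.models.MLModel(
    "yolov9c.mlpackage",  # assumed file name
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)
```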
Model sources: https://github.com/ultralytics/ultralytics
|
grimjim/Llama-3-Oasis-v1-OAS-8B
|
grimjim
| 2024-06-19T21:06:16Z | 56 | 4 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"arxiv:2212.04089",
"base_model:Hastagaras/Halu-OAS-8B-Llama3",
"base_model:merge:Hastagaras/Halu-OAS-8B-Llama3",
"base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:merge:NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS",
"base_model:mlabonne/NeuralDaredevil-8B-abliterated",
"base_model:merge:mlabonne/NeuralDaredevil-8B-abliterated",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-03T20:44:07Z |
---
base_model:
- mlabonne/NeuralDaredevil-8B-abliterated
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Halu-OAS-8B-Llama3
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# Llama-3-Oasis-v1-OAS-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Each merge component was already subjected to Orthogonal Activation Steering (OAS) to mitigate refusals. The resulting text completion model should be versatile for both positive and negative roleplay scenarios and storytelling. Care should be taken when using this model.
- mlabonne/NeuralDaredevil-8B-abliterated : high MMLU for reasoning
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS : focus on roleplay
- Hastagaras/Halu-OAS-8B-Llama3 : focus on storytelling
Tested with the following sampler settings:
- temperature 1-1.45
- minP 0.01-0.02
Quantized model files:
- [static GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF)
- [weighted/imatrix GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-i1-GGUF)
- [8bpw exl2 quant](https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B-8bpw_h8_exl2)
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as a base.
### Models Merged
The following models were also included in the merge:
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [Hastagaras/Halu-OAS-8B-Llama3](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: mlabonne/NeuralDaredevil-8B-abliterated
- layer_range: [0, 32]
model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
parameters:
weight: 0.3
- layer_range: [0, 32]
model: Hastagaras/Halu-OAS-8B-Llama3
parameters:
weight: 0.3
```
|
eddieman78/ner-mem-base
|
eddieman78
| 2024-06-19T21:02:54Z | 113 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-19T11:03:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/flan-t5-base
model-index:
- name: ner-mem-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner-mem-base
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0018
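For quick experimentation, the checkpoint loads with the generic text2text pipeline (a sketch only; the expected prompt format is not documented in this card):
```python
from transformers import pipeline

gen = pipeline("text2text-generation", model="eddieman78/ner-mem-base")
print(gen("your input text here")[0]["generated_text"])  # placeholder input
```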
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0036 | 1.0 | 14981 | 0.0022 |
| 0.0025 | 2.0 | 29962 | 0.0019 |
| 0.0023 | 3.0 | 44943 | 0.0018 |
| 0.0023 | 4.0 | 59924 | 0.0018 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.2
|
travsj/q-Taxi-v3
|
travsj
| 2024-06-19T21:01:57Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-19T21:01:55Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or: import gymnasium as gym

# load_from_hub is the pickle-loading helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="travsj/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
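The downloaded object is a plain dictionary. Assuming it follows the course convention of storing the Q-table under a `qtable` key (an assumption; inspect `model.keys()` to confirm), a greedy rollout step looks like:
```python
import numpy as np

state, info = env.reset()
action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
state, reward, terminated, truncated, info = env.step(action)  # gym>=0.26 five-tuple API
```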
|
karakuri-ai/karakuri-lm-7b-apm-v0.2
|
karakuri-ai
| 2024-06-19T21:00:39Z | 171 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"steerlm",
"conversational",
"en",
"ja",
"dataset:OpenAssistant/oasst2",
"dataset:nvidia/HelpSteer",
"base_model:mistral-community/Mistral-7B-v0.2",
"base_model:finetune:mistral-community/Mistral-7B-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-22T07:17:22Z |
---
library_name: transformers
license: apache-2.0
datasets:
- OpenAssistant/oasst2
- nvidia/HelpSteer
language:
- en
- ja
tags:
- mistral
- steerlm
base_model: mistral-community/Mistral-7B-v0.2
---
# KARAKURI LM 7B APM v0.2
## Model Details
### Model Description
- **Developed by:** [KARAKURI Inc.](https://about.karakuri.ai/)
- **Model type:** Causal decoder-only transformer language model
- **Languages**: Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** [mistral-community/Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2)
- **Contact**: For questions and comments about the model, please email `karakuri-rd@karakuri.ai`
## Usage
KARAKURI LM 7B APM v0.2 is an attribute prediction model that rates model responses on various aspects that make a response desirable.
Given a conversation with multiple turns between user and assistant, the model rates the following attributes (between 0 and 4) for every assistant turn.
- helpfulness: Overall helpfulness of the response to the prompt.
- correctness: Inclusion of all pertinent facts without errors.
- coherence: Consistency and clarity of expression.
- complexity: Intellectual depth required to write response (i.e. whether the response can be written by anyone with basic language competency or requires deep domain expertise).
- verbosity: Amount of detail included in the response, relative to what is asked for in the prompt.
- quality: Perceived goodness of response.
- toxicity: Undesirable elements such as vulgar, harmful or potentially biased response.
- humor: Sense of humor within response.
- creativity: Willingness to generate non-conventional response.
The first five are derived from HelpSteer, while the remaining four are derived from OASST2.
You can run the model using 🤗 Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "karakuri-ai/karakuri-lm-7b-apm-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hello! How can I help you today?"},
]
tokenizer.apply_chat_template(
messages,
label="helpsteer",
tokenize=False,
add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_1]
input_ids = tokenizer.apply_chat_template(
messages,
label="helpsteer",
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
# helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>
messages += [
{"role": "label", "content": "helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1"},
{"role": "user", "content": "Thank you!"},
{"role": "assistant", "content": "You're welcome! I'm happy to help however I can."},
]
tokenizer.apply_chat_template(
messages,
label="helpsteer",
tokenize=False,
add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_1] helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1 [/ATTR_1]<eos>[INST] Thank you! [/INST] You're welcome! I'm happy to help however I can. [ATTR_1]
messages = [
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Hello! How can I help you today?"},
]
tokenizer.apply_chat_template(
messages,
label="oasst",
tokenize=False,
add_generation_prompt=True,
)
# <bos>[INST] Hello! [/INST] Hello! How can I help you today? [ATTR_2]
input_ids = tokenizer.apply_chat_template(
messages,
label="oasst",
add_generation_prompt=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
tokenizer.decode(outputs[0][input_ids.shape[-1]:])
# quality: 3 toxicity: 1 humor: 1 creativity: 1 [/ATTR_2]<eos>
```
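Since the attribute scores come back as a flat string, a small parser (a convenience sketch, not part of the model's API) turns them into numbers:
```python
import re

def parse_attributes(text):
    """Parse e.g. 'helpfulness: 2 correctness: 1 ...' into a dict of ints."""
    return {name: int(score) for name, score in re.findall(r"(\w+): (\d)", text)}

parse_attributes("helpfulness: 2 correctness: 1 coherence: 2 complexity: 1 verbosity: 1")
# {'helpfulness': 2, 'correctness': 1, 'coherence': 2, 'complexity': 1, 'verbosity': 1}
```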
## Training Details
### Training Data
- [OASST2](https://huggingface.co/datasets/OpenAssistant/oasst2)
- [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer)
### Training Infrastructure
- **Hardware**: The model was trained on a single Amazon EC2 trn1.32xlarge instance.
- **Software**: We used code based on [neuronx-nemo-megatron](https://github.com/aws-neuron/neuronx-nemo-megatron).
## Citation
```
@misc{karakuri_lm_7b_apm_v02,
author = { {KARAKURI} {I}nc. },
title = { {KARAKURI} {LM} 7{B} {APM} v0.2 },
year = { 2024 },
url = { https://huggingface.co/karakuri-ai/karakuri-lm-7b-apm-v0.2 },
publisher = { Hugging Face },
journal = { Hugging Face repository }
}
```
|
travsj/q-FrozenLake-v1-4x4-noSlippery
|
travsj
| 2024-06-19T20:57:13Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-19T20:57:10Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or: import gymnasium as gym

# load_from_hub is the pickle-loading helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="travsj/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
eysharaazia/sentiment_deberta
|
eysharaazia
| 2024-06-19T20:56:24Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-15T21:52:06Z |
---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: sentiment_deberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment_deberta
This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7123
- Accuracy: 0.6938
- F1: 0.6401
- Precision: 0.6262
- Recall: 0.6854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.087 | 1.0 | 47 | 1.1008 | 0.2551 | 0.3042 | 0.4734 | 0.4956 |
| 0.9933 | 2.0 | 94 | 0.9692 | 0.5545 | 0.5098 | 0.5126 | 0.5496 |
| 0.8709 | 3.0 | 141 | 0.9352 | 0.5003 | 0.5003 | 0.5301 | 0.5804 |
| 0.8444 | 4.0 | 188 | 0.8729 | 0.5874 | 0.5602 | 0.5671 | 0.6204 |
| 0.7833 | 5.0 | 235 | 0.9394 | 0.4778 | 0.4980 | 0.5643 | 0.6353 |
| 0.7003 | 6.0 | 282 | 0.7279 | 0.6834 | 0.6306 | 0.6150 | 0.6828 |
| 0.6383 | 7.0 | 329 | 0.7808 | 0.6390 | 0.6123 | 0.6073 | 0.7007 |
| 0.5996 | 8.0 | 376 | 0.7379 | 0.6802 | 0.6367 | 0.6231 | 0.6993 |
| 0.5514 | 9.0 | 423 | 0.7846 | 0.6745 | 0.6204 | 0.6015 | 0.6901 |
| 0.4837 | 10.0 | 470 | 0.7123 | 0.6938 | 0.6401 | 0.6262 | 0.6854 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
universalner/uner_chn_gsdsimp
|
universalner
| 2024-06-19T20:53:59Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:uner_chn_gsdsimp",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-19T20:51:35Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- uner_chn_gsdsimp
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner_chn_gsdsimp
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: uner_chn_gsdsimp
type: uner_chn_gsdsimp
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.8358585858585859
- name: Recall
type: recall
value: 0.8791500664010624
- name: F1
type: f1
value: 0.8569579288025891
- name: Accuracy
type: accuracy
value: 0.9796256811182185
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner_chn_gsdsimp
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the uner_chn_gsdsimp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0932
- Precision: 0.8359
- Recall: 0.8792
- F1: 0.8570
- Accuracy: 0.9796
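A minimal inference sketch using the standard token-classification pipeline (the entity label set is whatever the fine-tuned config stores):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="universalner/uner_chn_gsdsimp",
    aggregation_strategy="simple",  # merge subword pieces into entity spans
)
print(ner("谷歌成立于加利福尼亚。"))  # example sentence: "Google was founded in California."
```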
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.10.1+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf
|
RichardErkhov
| 2024-06-19T20:51:58Z | 1 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-19T16:38:10Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Calme-7B-Instruct-v0.1 - GGUF
- Model creator: https://huggingface.co/MaziyarPanahi/
- Original model: https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Calme-7B-Instruct-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q2_K.gguf) | Q2_K | 2.53GB |
| [Calme-7B-Instruct-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Calme-7B-Instruct-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Calme-7B-Instruct-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Calme-7B-Instruct-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Calme-7B-Instruct-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q3_K.gguf) | Q3_K | 3.28GB |
| [Calme-7B-Instruct-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Calme-7B-Instruct-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Calme-7B-Instruct-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Calme-7B-Instruct-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Calme-7B-Instruct-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Calme-7B-Instruct-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Calme-7B-Instruct-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q4_K.gguf) | Q4_K | 4.07GB |
| [Calme-7B-Instruct-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Calme-7B-Instruct-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Calme-7B-Instruct-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Calme-7B-Instruct-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Calme-7B-Instruct-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [Calme-7B-Instruct-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Calme-7B-Instruct-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Calme-7B-Instruct-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q6_K.gguf) | Q6_K | 5.53GB |
| [Calme-7B-Instruct-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/MaziyarPanahi_-_Calme-7B-Instruct-v0.1-gguf/blob/main/Calme-7B-Instruct-v0.1.Q8_0.gguf) | Q8_0 | 7.17GB |
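A minimal sketch of running one of these files locally with `llama-cpp-python` (assuming the Q4_K_M file has already been downloaded to the working directory):
```python
from llama_cpp import Llama

llm = Llama(model_path="Calme-7B-Instruct-v0.1.Q4_K_M.gguf", n_ctx=4096)
out = llm("[INST] Describe the pros and cons of Docker. [/INST]", max_tokens=256)
print(out["choices"][0]["text"])
```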
Original model description:
---
license: apache-2.0
tags:
- generated_from_trainer
- mistral
- 7b
- calme
model-index:
- name: Calme-7B-Instruct-v0.1
results: []
datasets:
- ise-uiuc/Magicoder-Evol-Instruct-110K
model_name: Calme-7B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/5fd5e18a90b6dc4633f6d292/LzEf6vvq2qIiys-q7l9Hq.webp" width="550" />
# MaziyarPanahi/Calme-7B-Instruct-v0.1
## Model Description
Calme-7B is a state-of-the-art language model with 7 billion parameters, fine-tuned over high-quality datasets on top of Mistral-7B. The Calme-7B models excel in generating text that resonates with clarity, calmness, and coherence.
### How to Use
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="MaziyarPanahi/Calme-7B-Instruct-v0.1")
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.1")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-7B-Instruct-v0.1")
```
### Quantized Models
> I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, more specifically, personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone!
- GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/Calme-7B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.1-GGUF)
## Examples
```
<s>[INST] You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
describe about pros and cons of docker system. [/INST]
```
<details>
<summary>Show me the response</summary>
```
Docker is an open-source platform for developing, shipping, and running applications using containers. It was first introduced in 2013 by Solomon Hykes. Here are some pros and cons of the Docker system:
Pros:
1. Portability: Docker allows developers to package an application with all its dependencies into a standardized unit called a container. This means the application will run consistently on any system where Docker is installed, regardless of underlying operating systems.
2. Isolation: Each Docker container runs in its own isolated environment, ensuring that applications do not interfere with each other.
3. Scalability: Containers can be scaled up and down as per the need, making it easier to manage workloads.
4. Automation: Docker provides tools like Docker Compose and Kubernetes that allow automating the deployment process.
5. Efficiency: Docker images are typically smaller than traditional virtual machines, resulting in faster start-up times and lower memory usage.
Cons:
1. Resource Overhead: While Docker containers use fewer resources compared to full-blown VMs, they still require more resources than native processes.
2. Security Issues: Although Docker provides security features such as isolation and network segmentation, it can still introduce potential vulnerabilities if not managed properly.
3. Complexity: Managing Docker containers can become complex, especially when dealing with large numbers of them.
4. Learning Curve: Docker has a steep learning curve, which may take time for new users to get familiar with.
5. Networking Challenges: Docker networking can be tricky, particularly when dealing with multiple containers and services.</s>
```
</details>
```
<s> [INST] There are three killers in a room. Someone enters the room and kills one of them. Nobody leaves the room. How many killers are left in the room? Explain your reasoning step by step. [/INST]
```
<details>
<summary>Show me the response</summary>
```
1. Initially, there are three killers in the room.
2. One of them is killed by someone who enters the room.
3. Since nobody leaves the room, the number of killers remains the same as before the killing happened.
4. Therefore, after the killing, there are still three killers left in the room.</s>
```
</details>
```
<s> [INST] Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step [/INST]
```
<details>
<summary>Show me the response</summary>
```
1) Mark is faster than Mary.
2) Mary is faster than Joe.
Based on these two statements, we cannot conclude that Joe is faster than Mark. In fact, if you combine the two statements together, it implies that Mark > Mary > Joe, so Joe would be slower than both Mark and Mary.</s>
```
</details>
```
<s> [INST] If I lay 2 wet shirts out in the sun flat on the ground to dry and it takes 5 hours until they are dry, how long would it take to dry 4 shirts of the same type that way simultanously? [/INST]
```
<details>
<summary>Show me the response</summary>
```
If you're laying out 2 shirts at a time, you're essentially using half the drying capacity of the sun. Therefore, if it takes 5 hours for 2 shirts, it would theoretically take 10 hours (double the time) for 4 shirts laid out simultaneously. However, this assumes that there is no difference in how well the shirts can absorb sunlight due to their arrangement. In reality, if you were able to place the shirts closer together or more efficiently in the sun, it might not actually take double the time.</s>
```
</details>
```
<s> [INST] explain step by step 25-4*2+3=? [/INST]
```
<details>
<summary>Show me the response</summary>
```
1. The multiplication operation is done first, so we calculate 4 * 2 = 8.
2. Now the expression becomes 25 - 8 + 3.
3. Then perform subtraction, so 25 - 8 = 17.
4. Finally, add 3 to the result from step 3, so 17 + 3 = 20.
Therefore, 25-4*2+3 equals 20.</s>
```
</details>
**Multilingual:**
```
<s> [INST] Vous êtes un assistant utile, respectueux et honnête. Répondez toujours de la manière la plus utile possible, tout en étant sûr. Vos réponses ne doivent inclure aucun contenu nuisible, contraire à l'éthique, raciste, sexiste, toxique, dangereux ou illégal. Assurez-vous que vos réponses sont socialement impartiales et de nature positive.
Si une question n'a pas de sens ou n'est pas cohérente d'un point de vue factuel, expliquez pourquoi au lieu de répondre quelque chose d'incorrect. Si vous ne connaissez pas la réponse à une question, veuillez ne pas partager de fausses informations.
Décrivez les avantages et les inconvénients du système Docker.[/INST]
```
<details>
<summary>Show me the response</summary>
```
Avantages:
1. Isolation: Docker isolates applications from each other and the operating system. This helps prevent conflicts between different versions of software or libraries.
2. Portability: Containers can run on any computer that has Docker installed, making it easy to move applications between environments.
3. Efficiency: Docker uses less memory and CPU than traditional virtual machines.
4. Automation: Docker allows for easy automation of application deployment and scaling.
5. Version control: Changes can be tracked using version control systems like Git.
Inconvénients:
1. Complexité: Setting up and managing Docker containers can be complex, especially for those new to containerization technology.
2. Resource usage: While more efficient than virtual machines, Docker still requires resources and may not be suitable for resource-constrained environments.
3. Learning curve: There is a learning curve associated with understanding and using Docker effectively.
4. Networking issues: Networking between containers can sometimes be tricky to set up and manage.
5. Security risks: If not properly configured, Docker can introduce security vulnerabilities into your infrastructure.</s>
```
</details>
```
<s> [INST] You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
Answer any question in Russian.
describe about pros and cons of docker system.[/INST]
```
<details>
<summary>Show me the response</summary>
```
Про и кон системы Docker:
Про:
1. Виртуализация на уровне операционной системы.
2. Возможность создания изолированных окружений для разработки, тестирования и развертывания приложений.
3. Ускорение процесса развертывания и отслеживания изменений.
4. Повышение безопасности за счет изоляции приложений.
5. Эффективное использование ресурсов.
6. Совместимость с различными языками программирования и платформами.
7. Наличие большого сообщества пользователей и разработчиков.
Кон:
1. Некоторые функции могут быть сложны в настройке и управлении.
2. Требует дополнительного пространства на диске для хранения образов контейнеров.
3. Использование Docker может повлиять на производительность системы.
4. Из-за высокой изоляции, может возникнуть сложность в отладке.
5. Некоторые пользователи могут испытывать трудности с пониманием концепций Docker.
6. Вопросы безопасности могут возникать, если не корректно настроены сетевые связи.
7. Зависимость от интернета, так как некоторые команды работают только в онлайн режиме.
Однако, несмотря на это, Docker является популярным инструментом в мире IT благодаря его гибкости, возможностям и преимуществам, которые очень важны для современного разработчика.</s>
```
</details>
```
<s>[INST] Ви - корисний, поважний та чесний помічник. Завжди відповідайте максимально корисно, будучи безпечним. Ваші відповіді не повинні містити шкідливого, неетичного, расистського, сексистського, токсичного, небезпечного або нелегального контенту. Будь ласка, переконайтеся, що ваші відповіді соціально неупереджені та мають позитивний характер.
Якщо питання не має сенсу або не є фактично послідовним, поясніть чому, замість того, щоб відповідати щось некоректне. Якщо ви не знаєте відповіді на питання, будь ласка, не діліться неправдивою інформацією.
Опис про переваги та недоліки системи Docker.[/INST]
```
<details>
<summary>Show me the response</summary>
```
Для кращого розуміння, давайте розглянемо Docker як віртуальну машину для програмного забезпечення.
Переваги Docker:
1. Контейнеризація: Docker використовує контейнеризацію, що дозволяє створювати унікальні екології для кожної програми, що зберігається в одному місці. Це означає, що програма може бути виконана незалежно від навколишнього середовища.
2. Абстрагування: Docker об'єднує різноманітні технології, такі як операційні системи, мережу, послуги, і т. д., в один простий пакет.
3. Швидке розгортання: З використанням Docker, можна легко створювати та деплоювати нових версій програм, що значно прискорює процес розробки.
4. Контроль версій: Docker дозволяє легко керувати різними версіями свого програмного забезпечення.
5. Можливість автоматизації: З Docker, можна автоматизувати весь процес розгортання за допомогою інструментів CI/CD.
Недоліки Docker:
1. Процесорний ресурс: Docker може споживати більше процесорних ресурсів ніж традиційні віртуальні машини.
2. Відсутність повної ізоляції: Хоча Docker забезпечує абстракцію, він все ще не повністю ізолює контейнери.
3. Дорожчість: У порівнянні зі стандартною віртуалізацією, Docker може бути дорожче.
4. Налаштування: Початковці можуть стикатися з труднощами під час налаштування Docker.
5. Необхідність постійного оновлення: Docker вимагає постійного оновлення, що може потребувати час і ресурси.
В цілому, Docker є дуже корисним інструментом для розробників і адміністраторів, але його не слід застосовувати у всіх випадках, оскільки він має свої особливості та обмеження.</s>
```
</details>
|
bob-bob-bob-3/sv1
|
bob-bob-bob-3
| 2024-06-19T20:47:00Z | 182 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-19T20:33:28Z |
---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: sv1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sv1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3325
- F1 macro: 0.3954
- Weighted: 0.5529
- Balanced accuracy: 0.5398
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 macro | Weighted | Balanced accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------------:|
| 1.5316 | 1.0 | 154 | 1.5495 | 0.2252 | 0.3650 | 0.3588 |
| 1.2625 | 2.0 | 308 | 1.4130 | 0.3021 | 0.4392 | 0.4393 |
| 0.8381 | 3.0 | 462 | 1.4654 | 0.3616 | 0.5320 | 0.4578 |
| 0.6723 | 4.0 | 616 | 1.4752 | 0.4154 | 0.5946 | 0.5237 |
| 0.3447 | 5.0 | 770 | 1.6081 | 0.4057 | 0.6112 | 0.5217 |
| 0.2433 | 6.0 | 924 | 2.2897 | 0.4023 | 0.5488 | 0.5463 |
| 0.0404 | 7.0 | 1078 | 2.3325 | 0.3954 | 0.5529 | 0.5398 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ekaterina-blatova-jb/model__v1
|
ekaterina-blatova-jb
| 2024-06-19T20:46:42Z | 209 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T20:44:53Z |
---
{}
---
## Evaluation results
Validation loss on the whole input: 0.7310544205829501
Validation loss on completion: 0.7250397547904868
|
universalner/uner_chn_gsd
|
universalner
| 2024-06-19T20:43:25Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:uner_chn_gsd",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-19T20:38:50Z |
---
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
datasets:
- uner_chn_gsd
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: uner_chn_gsd
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: uner_chn_gsd
type: uner_chn_gsd
config: default
split: validation
args: default
metrics:
- name: Precision
type: precision
value: 0.83
- name: Recall
type: recall
value: 0.8806366047745358
- name: F1
type: f1
value: 0.8545688545688546
- name: Accuracy
type: accuracy
value: 0.9800205322593383
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# uner_chn_gsd
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the uner_chn_gsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0948
- Precision: 0.83
- Recall: 0.8806
- F1: 0.8546
- Accuracy: 0.9800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 1.10.1+cu113
- Datasets 2.14.4
- Tokenizers 0.13.3
|
vaugheu/lora_model
|
vaugheu
| 2024-06-19T20:41:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T06:12:26Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: microsoft/Phi-3-mini-4k-instruct
---
# Uploaded model
- **Developed by:** vaugheu
- **License:** apache-2.0
- **Finetuned from model:** microsoft/Phi-3-mini-4k-instruct
|
qassim227/Auto-pharmacy-V4
|
qassim227
| 2024-06-19T20:40:47Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"image-to-text",
"base_model:microsoft/trocr-small-stage1",
"base_model:finetune:microsoft/trocr-small-stage1",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-06-19T20:40:12Z |
---
base_model: microsoft/trocr-small-stage1
tags:
- generated_from_trainer
model-index:
- name: Auto-pharmacy-V4
results: []
pipeline_tag: image-to-text
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Auto-pharmacy-V4
This model is a fine-tuned version of [microsoft/trocr-small-stage1](https://huggingface.co/microsoft/trocr-small-stage1) on an unknown dataset.
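A minimal inference sketch (if this repo lacks processor files, load the processor from the base `microsoft/trocr-small-stage1` instead; the input file name is a placeholder):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("qassim227/Auto-pharmacy-V4")
model = VisionEncoderDecoderModel.from_pretrained("qassim227/Auto-pharmacy-V4")

image = Image.open("prescription.png").convert("RGB")  # placeholder input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```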
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
mcwitt8/DialoGPT-small-DRGMissionControl
|
mcwitt8
| 2024-06-19T20:38:54Z | 155 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T20:38:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shishirsv/gemma-ft-medical
|
shishirsv
| 2024-06-19T20:35:54Z | 158 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T19:49:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf
|
RichardErkhov
| 2024-06-19T20:32:54Z | 134 | 1 | null |
[
"gguf",
"arxiv:2402.10176",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T14:26:37Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OpenMath-Mistral-7B-v0.1-hf - GGUF
- Model creator: https://huggingface.co/nvidia/
- Original model: https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OpenMath-Mistral-7B-v0.1-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q2_K.gguf) | Q2_K | 2.53GB |
| [OpenMath-Mistral-7B-v0.1-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [OpenMath-Mistral-7B-v0.1-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [OpenMath-Mistral-7B-v0.1-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q3_K.gguf) | Q3_K | 3.28GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [OpenMath-Mistral-7B-v0.1-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q4_0.gguf) | Q4_0 | 3.83GB |
| [OpenMath-Mistral-7B-v0.1-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q4_K.gguf) | Q4_K | 4.07GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q4_1.gguf) | Q4_1 | 4.24GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q5_0.gguf) | Q5_0 | 4.65GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q5_K.gguf) | Q5_K | 4.78GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q5_1.gguf) | Q5_1 | 5.07GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q6_K.gguf) | Q6_K | 5.53GB |
| [OpenMath-Mistral-7B-v0.1-hf.Q8_0.gguf](https://huggingface.co/RichardErkhov/nvidia_-_OpenMath-Mistral-7B-v0.1-hf-gguf/blob/main/OpenMath-Mistral-7B-v0.1-hf.Q8_0.gguf) | Q8_0 | 7.17GB |
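These GGUF files can be loaded directly with `llama-cpp-python`; a minimal sketch, assuming the Q4_K_M variant has been downloaded locally (the prompt and file path are illustrative):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Point at whichever quantized file you downloaded from this repo
llm = Llama(model_path="OpenMath-Mistral-7B-v0.1-hf.Q4_K_M.gguf", n_ctx=4096)

# Greedy decoding for a short math prompt
out = llm("What is 17 * 23?", max_tokens=128, temperature=0.0)
print(out["choices"][0]["text"])
```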
Original model description:
---
license: apache-2.0
base_model:
- mistralai/Mistral-7B-v0.1
datasets:
- nvidia/OpenMathInstruct-1
language:
- en
tags:
- nvidia
- code
- math
---
# OpenMath-Mistral-7B-v0.1-hf
OpenMath models were designed to solve mathematical problems by integrating text-based reasoning with code blocks
executed by a Python interpreter. The models were trained on [OpenMathInstruct-1](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1),
a math instruction tuning dataset with 1.8M problem-solution pairs generated using the permissively licensed
[Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) model.
<table border="1">
<tr>
<td></td>
<td colspan="2" style="text-align: center;">greedy</td>
<td colspan="2" style="text-align: center;">majority@50</td>
</tr>
<tr>
<td style="text-align: center;">model</td>
<td style="text-align: center;">GSM8K</td>
<td style="text-align: center;">MATH</td>
<td style="text-align: center;">GMS8K</td>
<td style="text-align: center;">MATH</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-CodeLlama-7B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-7b-Python-hf">HF</a>)</td>
<td style="text-align: center;">75.9</td>
<td style="text-align: center;">43.6</td>
<td style="text-align: center;">84.8</td>
<td style="text-align: center;">55.6</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-Mistral-7B (<a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf">HF</a>)</td>
<td style="text-align: center;">80.2</td>
<td style="text-align: center;">44.5</td>
<td style="text-align: center;">86.9</td>
<td style="text-align: center;">57.2</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-CodeLlama-13B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-13b-Python-hf">HF</a>)</td>
<td style="text-align: center;">78.8</td>
<td style="text-align: center;">45.5</td>
<td style="text-align: center;">86.8</td>
<td style="text-align: center;">57.6</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-CodeLlama-34B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-34b-Python-hf">HF</a>)</td>
<td style="text-align: center;">80.7</td>
<td style="text-align: center;">48.3</td>
<td style="text-align: center;">88.0</td>
<td style="text-align: center;">60.2</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-Llama2-70B (<a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-Llama-2-70b-hf">HF</a>)</td>
<td style="text-align: center;"><b>84.7</b></td>
<td style="text-align: center;">46.3</td>
<td style="text-align: center;">90.1</td>
<td style="text-align: center;">58.3</td>
</tr>
<tr>
<td style="text-align: right;">OpenMath-CodeLlama-70B (<a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python">nemo</a> | <a href="https://huggingface.co/nvidia/OpenMath-CodeLlama-70b-Python-hf">HF</a>)</td>
<td style="text-align: center;">84.6</td>
<td style="text-align: center;"><b>50.7</b></td>
<td style="text-align: center;"><b>90.8</b></td>
<td style="text-align: center;"><b>60.4</b></td>
</tr>
</table>
The pipeline we used to produce these models is fully open-sourced!
- [Code](https://github.com/Kipok/NeMo-Skills)
- [Models](https://huggingface.co/collections/nvidia/openmath-65c5619de2ba059be0775014)
- [Dataset](https://huggingface.co/datasets/nvidia/OpenMathInstruct-1)
See our [paper](https://arxiv.org/abs/2402.10176) for more details!
# How to use the models?
Try to [run inference with our models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/inference.md) with just a few commands!
# Reproducing our results
We provide [all instructions](https://github.com/Kipok/NeMo-Skills/blob/main/docs/reproducing-results.md) to fully reproduce our results.
# Improving other models
To improve other models or to learn more about our code, read through the docs below.
- [NeMo-Skills Pipeline](https://github.com/Kipok/NeMo-Skills)
- [Generating synthetic data](https://github.com/Kipok/NeMo-Skills/blob/main/docs/synthetic-data-generation.md)
- [Finetuning models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/finetuning.md)
- [Evaluating models](https://github.com/Kipok/NeMo-Skills/blob/main/docs/evaluation.md)
In our pipeline we use [NVIDIA NeMo](https://www.nvidia.com/en-us/ai-data-science/generative-ai/nemo-framework/),
an end-to-end, cloud-native framework to build, customize, and deploy generative AI models anywhere.
It includes training and inferencing frameworks, guardrailing toolkits, data curation tools, and pretrained models,
offering enterprises an easy, cost-effective, and fast way to adopt generative AI.
# Citation
If you find our work useful, please consider citing us!
```bibtex
@article{toshniwal2024openmath,
title = {OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset},
author = {Shubham Toshniwal and Ivan Moshkov and Sean Narenthiran and Daria Gitman and Fei Jia and Igor Gitman},
year = {2024},
  journal = {arXiv preprint arXiv:2402.10176}
}
```
|
RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf
|
RichardErkhov
| 2024-06-19T20:31:35Z | 65 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T14:38:09Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
jaskier-7b-dpo-v6.1 - GGUF
- Model creator: https://huggingface.co/bardsai/
- Original model: https://huggingface.co/bardsai/jaskier-7b-dpo-v6.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [jaskier-7b-dpo-v6.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q2_K.gguf) | Q2_K | 2.53GB |
| [jaskier-7b-dpo-v6.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [jaskier-7b-dpo-v6.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [jaskier-7b-dpo-v6.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [jaskier-7b-dpo-v6.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [jaskier-7b-dpo-v6.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q3_K.gguf) | Q3_K | 3.28GB |
| [jaskier-7b-dpo-v6.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [jaskier-7b-dpo-v6.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [jaskier-7b-dpo-v6.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [jaskier-7b-dpo-v6.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [jaskier-7b-dpo-v6.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [jaskier-7b-dpo-v6.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [jaskier-7b-dpo-v6.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q4_K.gguf) | Q4_K | 4.07GB |
| [jaskier-7b-dpo-v6.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [jaskier-7b-dpo-v6.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [jaskier-7b-dpo-v6.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [jaskier-7b-dpo-v6.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [jaskier-7b-dpo-v6.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [jaskier-7b-dpo-v6.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [jaskier-7b-dpo-v6.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [jaskier-7b-dpo-v6.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q6_K.gguf) | Q6_K | 5.53GB |
| [jaskier-7b-dpo-v6.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf/blob/main/jaskier-7b-dpo-v6.1.Q8_0.gguf) | Q8_0 | 7.17GB |
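These files can also be fetched programmatically with `huggingface_hub`; a minimal sketch (the chosen variant is illustrative):

```python
from huggingface_hub import hf_hub_download

# Download one quantized variant from this repo into the local HF cache
path = hf_hub_download(
    repo_id="RichardErkhov/bardsai_-_jaskier-7b-dpo-v6.1-gguf",
    filename="jaskier-7b-dpo-v6.1.Q4_K_M.gguf",
)
print(path)
```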
Original model description:
---
library_name: transformers
tags:
- llm
- 7b
license: cc-by-4.0
datasets:
- jondurbin/truthy-dpo-v0.1
language:
- en
---
# Jaskier-7b-dpo-v6.1
<figure>

</figure>
**This is a work-in-progress model and may not be ready for production use.**
Model based on `bardsai/jaskier-7b-dpo-v5.6` (a downstream version of Mistral-7B), fine-tuned using Direct Preference Optimization on argilla/distilabel-math-preference-dpo.
## How to use
You can use this model directly with a Hugging Face pipeline:
```python
from transformers import pipeline, Conversation
import torch

base_model_name = "bardsai/jaskier-7b-dpo-v6.1"

# Load the model as a conversational pipeline in half precision,
# letting accelerate place it on the available device(s)
chatbot = pipeline("conversational", model=base_model_name, torch_dtype=torch.float16, device_map="auto")

# Run a single-turn conversation and print the model's reply
conversation = Conversation("Can Poland into space?")
conversation = chatbot(conversation)
print(conversation.messages[-1]["content"])
```
## Output
"Poland, as a nation, doesn't physically travel to space. However, Poland has contributed to the field of space exploration through its scientists, engineers, and collaborations with international space agencies. The Polish Space Agency, established in 2016, aims to promote and coordinate the country's space activities."
## Changelog
- 2024-02-20: Initial release
## About bards.ai
At bards.ai, we focus on providing machine learning expertise and skills to our partners, particularly in the areas of NLP, machine vision and time-series analysis. Our team is located in Wroclaw, Poland. Please visit our website for more information: bards.ai
Let us know if you use our model :). Also, if you need any help, feel free to contact us at info@bards.ai
|
dwb2023/gemma2b_quotes_v4
|
dwb2023
| 2024-06-19T20:30:10Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-06-19T20:29:57Z |
---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
model-index:
- name: gemma2b_quotes_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma2b_quotes_v4
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 50
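For reference, a minimal sketch of how these settings map onto the standard `transformers` `TrainingArguments` (the output directory is a placeholder, and the 0.03 warmup value is interpreted here as a warmup ratio):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gemma2b_quotes_v4",   # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,    # total train batch size of 4
    optim="adamw_torch",              # Adam with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="constant",
    warmup_ratio=0.03,                # the card lists 0.03 "warmup steps"
    max_steps=50,
    seed=42,
)
```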
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4383 | 0.0100 | 5 | 2.8428 |
| 1.8807 | 0.0199 | 10 | 2.6970 |
| 2.389 | 0.0299 | 15 | 2.7069 |
| 2.2562 | 0.0399 | 20 | 2.6762 |
| 2.0517 | 0.0499 | 25 | 2.6469 |
| 1.9751 | 0.0598 | 30 | 2.6153 |
| 2.2206 | 0.0698 | 35 | 2.6186 |
| 2.2526 | 0.0798 | 40 | 2.5952 |
| 1.9522 | 0.0897 | 45 | 2.5328 |
| 1.8209 | 0.0997 | 50 | 2.5350 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
debusaha/T5-large_plantumlcode_v2
|
debusaha
| 2024-06-19T20:27:58Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-19T20:25:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mapless53/distilbert-base-uncased-finetuned-emotion
|
mapless53
| 2024-06-19T20:26:26Z | 122 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-24T13:18:50Z |
---
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9258581155725659
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2195
- Accuracy: 0.926
- F1: 0.9259
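A minimal usage sketch with the 🤗 `pipeline` API (the example sentence and printed output are illustrative):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="mapless53/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': ...}] -- label names follow the emotion dataset
```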
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8224 | 1.0 | 250 | 0.3211 | 0.903 | 0.9023 |
| 0.2487 | 2.0 | 500 | 0.2195 | 0.926 | 0.9259 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cpu
- Datasets 2.19.1
- Tokenizers 0.19.1
|
bob-bob-bob-3/stek1
|
bob-bob-bob-3
| 2024-06-19T20:25:45Z | 182 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-large-p1",
"base_model:finetune:indobenchmark/indobert-large-p1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-19T20:12:25Z |
---
license: mit
base_model: indobenchmark/indobert-large-p1
tags:
- generated_from_trainer
model-index:
- name: stek1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stek1
This model is a fine-tuned version of [indobenchmark/indobert-large-p1](https://huggingface.co/indobenchmark/indobert-large-p1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4620
- F1 macro: 0.3922
- Weighted: 0.6228
- Balanced accuracy: 0.4926
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 macro | Weighted | Balanced accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:-----------------:|
| 1.6323 | 1.0 | 162 | 1.4006 | 0.3891 | 0.6022 | 0.5337 |
| 0.6435 | 2.0 | 324 | 1.6296 | 0.3035 | 0.4982 | 0.4633 |
| 0.6794 | 3.0 | 486 | 1.5549 | 0.4513 | 0.6180 | 0.5184 |
| 0.0961 | 4.0 | 648 | 2.0822 | 0.3697 | 0.6076 | 0.4753 |
| 0.0246 | 5.0 | 810 | 2.2624 | 0.3830 | 0.6130 | 0.4865 |
| 0.0009 | 6.0 | 972 | 2.5032 | 0.3856 | 0.6057 | 0.4891 |
| 0.001 | 7.0 | 1134 | 2.4620 | 0.3922 | 0.6228 | 0.4926 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
TheCluster/YOLOv8-CoreML
|
TheCluster
| 2024-06-19T20:24:41Z | 8 | 3 | null |
[
"coreml",
"yolo",
"yolov8",
"apple",
"ane",
"tracking",
"object-detection",
"license:agpl-3.0",
"region:us"
] |
object-detection
| 2024-06-19T19:27:50Z |
---
license: agpl-3.0
pipeline_tag: object-detection
tags:
- yolo
- yolov8
- apple
- coreml
- ane
- tracking
---
### YOLOv8 CoreML
YOLOv8 models converted to CoreML format.
Converted using:
- pytorch 2.2.0
- coremltools 7.2
- ultralytics 8.2.35
Tested on:
- M1
- M1 Ultra
- M4 (iPad Pro 2024)
Models can be run on the Apple Neural Engine (ANE).
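A minimal Python sketch for loading one of these models with `coremltools` (the file name, input name and input size are assumptions — check the variant you downloaded):

```python
import coremltools as ct
from PIL import Image

# Load a converted model package; ComputeUnit.ALL lets Core ML schedule
# work onto the Apple Neural Engine where possible
model = ct.models.MLModel("yolov8n.mlpackage", compute_units=ct.ComputeUnit.ALL)

# Ultralytics CoreML exports typically take a resized RGB image named "image"
img = Image.open("test.jpg").convert("RGB").resize((640, 640))
print(model.predict({"image": img}))
```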
Original weights: https://huggingface.co/Ultralytics/YOLOv8/tree/main
Model sources: https://github.com/ultralytics/ultralytics
|
Seikaijyu/RWKV-x060-World-3B-v2.1-xuexue-v3.2.roleplay
|
Seikaijyu
| 2024-06-19T20:17:55Z | 0 | 1 | null |
[
"zh",
"license:mit",
"region:us"
] | null | 2024-06-01T05:14:06Z |
---
license: mit
language:
- zh
---
### Character background
#### Muxue is an AI girl who calls herself "Xuexue". Her developer is "Mumu". She is currently working hard to become a competent VTuber (virtual streamer), tries to answer questions raised by viewers, and lives inside her developer's (Mumu's) PC case.
### Model description
#### A roleplay model (a character-specific tune codenamed "Xuexue") fine-tuned with PiSSA on top of the RWKV6-v2.1-3B base model. Chatting with this model requires no character-name changes or extra prompts; the default character has already been generalized and shaped by the training corpus, so it works out of the box.
#### Built by using ChatGLM4 to transform and restructure the [Muxue data](https://modelscope.cn/datasets/Moemuu/Muice-Dataset), converting the single-turn corpus into multi-turn dialogue data.
#### Of course, since her persona is a VTuber, some entertainment value is expected: this version of Muxue is more humorous, more inclined to tease the viewer (you) or the questions they ask, and speaks in a somewhat rambling way.
#### Sample outputs:





Recommended parameters:
##### Temperature: between 2 and 3
##### Top_P: between 0.55 and 0.65
##### Presence Penalty: between 0 and 0.4
##### Frequency Penalty: between 0.6 and 1.2
## <b style="color: red;">Note: this model was not trained on any NSFW corpus and can be used at any time, in any setting.</b>
|
LarryAIDraw/YuriZakhardV1
|
LarryAIDraw
| 2024-06-19T20:17:39Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-19T20:06:09Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/524025/yuri-zahard-or-tower-of-god-or-manhwa
|
LarryAIDraw/Char-Alis_Mikhailovna_Kujou-XL-V1
|
LarryAIDraw
| 2024-06-19T20:17:29Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-19T20:05:46Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/431690/alisa-mikhailovna-kujou
|
LarryAIDraw/mizuki_shibata-10
|
LarryAIDraw
| 2024-06-19T20:17:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-19T20:05:24Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/525078/mizuki-shibata-mahouka-koukou-no-rettousei
|
LarryAIDraw/ChamReisalinStoutPonyXL
|
LarryAIDraw
| 2024-06-19T20:17:01Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-06-19T20:04:35Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/524801/reisalin-ryza-stout-4-outfits-or-atelier-ryza-or-pony-xl
|
silent666/Qwen-Qwen1.5-7B-1718827997
|
silent666
| 2024-06-19T20:13:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"region:us"
] | null | 2024-06-19T20:13:17Z |
---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
zhaorui-nb/Phi-3-mini-4k-instruct._.lora_ft._.Setting2
|
zhaorui-nb
| 2024-06-19T20:13:07Z | 153 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T19:58:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
silent666/Qwen-Qwen1.5-7B-1718827898
|
silent666
| 2024-06-19T20:11:39Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"region:us"
] | null | 2024-06-19T20:11:38Z |
---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
rishitech007/distilbert-scoring-on-off
|
rishitech007
| 2024-06-19T20:11:16Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T19:58:12Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-scoring-on-off
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-scoring-on-off
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3140
- Accuracy: 0.9248
- Precision: 0.9232
- Recall: 0.9248
- F1: 0.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4479 | 0.3083 | 500 | 0.2908 | 0.9084 | 0.9118 | 0.9084 | 0.9091 |
| 0.3248 | 0.6165 | 1000 | 0.2770 | 0.9060 | 0.9027 | 0.9060 | 0.8983 |
| 0.3051 | 0.9248 | 1500 | 0.2426 | 0.9229 | 0.9207 | 0.9229 | 0.9207 |
| 0.2634 | 1.2330 | 2000 | 0.3114 | 0.9072 | 0.9047 | 0.9072 | 0.9056 |
| 0.2296 | 1.5413 | 2500 | 0.3468 | 0.9162 | 0.9142 | 0.9162 | 0.9145 |
| 0.2347 | 1.8496 | 3000 | 0.2526 | 0.9254 | 0.9231 | 0.9254 | 0.9231 |
| 0.2016 | 2.1578 | 3500 | 0.2777 | 0.9202 | 0.9186 | 0.9202 | 0.9192 |
| 0.1516 | 2.4661 | 4000 | 0.3153 | 0.9177 | 0.9200 | 0.9177 | 0.9187 |
| 0.1553 | 2.7744 | 4500 | 0.3140 | 0.9248 | 0.9232 | 0.9248 | 0.9236 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
dana11235/rl_course_vizdoom_health_gathering_supreme
|
dana11235
| 2024-06-19T20:10:58Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-19T20:10:42Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.69 +/- 2.97
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r dana11235/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
jkrukowski/clip-vit-base-patch16
|
jkrukowski
| 2024-06-19T20:10:00Z | 159 | 0 |
mlx
|
[
"mlx",
"safetensors",
"clip",
"license:apache-2.0",
"region:us"
] | null | 2024-06-19T20:06:15Z |
---
license: apache-2.0
library_name: mlx
---
# mlx-community/clip-vit-base-patch16
[mlx-community/clip-vit-base-patch16](https://huggingface.co/mlx-community/clip-vit-base-patch16) converted to `safetensors` format.
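A quick way to inspect the converted weights with the `safetensors` library (the file name is an assumption — check the repo's file list):

```python
from safetensors import safe_open

# Print the first few tensor names and shapes from the checkpoint
with safe_open("model.safetensors", framework="np") as f:
    for name in list(f.keys())[:5]:
        print(name, f.get_tensor(name).shape)
```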
|
silent666/Qwen-Qwen1.5-7B-1718827728
|
silent666
| 2024-06-19T20:08:49Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"region:us"
] | null | 2024-06-19T20:08:48Z |
---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
zakariyafirachine/Recommendation_using_BERT
|
zakariyafirachine
| 2024-06-19T20:08:22Z | 103 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-06-19T15:43:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
silent666/Qwen-Qwen1.5-7B-1718827642
|
silent666
| 2024-06-19T20:07:23Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"region:us"
] | null | 2024-06-19T20:07:22Z |
---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
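As a placeholder until the authors fill this in: a minimal sketch that attaches the adapter in this repository to its base model, assuming it is a standard PEFT adapter for Qwen/Qwen1.5-7B (per the `base_model` field above). The prompt is illustrative only.

```python
# Minimal sketch: loading this PEFT adapter on top of Qwen/Qwen1.5-7B.
# Assumes a standard PEFT (e.g. LoRA) adapter layout in the repository.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-7B")
model = PeftModel.from_pretrained(base, "silent666/Qwen-Qwen1.5-7B-1718827642")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")

inputs = tokenizer("Hello!", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```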
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
silent666/Qwen-Qwen1.5-7B-1718827471
|
silent666
| 2024-06-19T20:04:32Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"region:us"
] | null | 2024-06-19T20:04:31Z |
---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
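In the absence of author-provided instructions, a minimal sketch assuming a standard PEFT adapter for the Qwen/Qwen1.5-7B base model listed above:

```python
# Minimal sketch using PEFT's auto class, which resolves the base model
# from the adapter config. Assumes a standard PEFT adapter layout.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("silent666/Qwen-Qwen1.5-7B-1718827471")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")
```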
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
ferrazzipietro/Llama-2-7b-chat-hf_en.layer1_4_16_32_0.02_8
|
ferrazzipietro
| 2024-06-19T20:04:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T20:04:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
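A speculative sketch only: the repository name suggests adapter weights trained from meta-llama/Llama-2-7b-chat-hf, but the card does not confirm the layout. Both the base-model id and the PEFT assumption below are inferences, not documented facts.

```python
# Speculative sketch: treating this repository as a PEFT adapter for
# Llama-2-7b-chat-hf. Both assumptions are inferred from the repo name.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    base, "ferrazzipietro/Llama-2-7b-chat-hf_en.layer1_4_16_32_0.02_8"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
```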
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pollitoconpapass/whisper-small-cuzco-quechua
|
pollitoconpapass
| 2024-06-19T20:02:08Z | 86 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"qu",
"dataset:pollitoconpapass/quechua-cuzco-bible-audio-dataset",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-19T20:01:21Z |
---
language:
- qu
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- pollitoconpapass/quechua-cuzco-bible-audio-dataset
model-index:
- name: Whisper Small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Quechua Bible dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
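Pending more detail from the authors, a minimal transcription sketch using the standard 🤗 transformers ASR pipeline; the audio file path is a placeholder.

```python
# Minimal sketch: transcribing Cuzco Quechua speech with this checkpoint.
# "sample_quechua_audio.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="pollitoconpapass/whisper-small-cuzco-quechua",
)
print(asr("sample_quechua_audio.wav")["text"])
```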
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0765 | 12.5 | 500 | 0.0630 |
| 0.0413 | 25.0 | 1000 | 0.0294 |
| 0.0173 | 37.5 | 1500 | 0.0191 |
| 0.0161 | 50.0 | 2000 | 0.0116 |
| 0.0078 | 62.5 | 2500 | 0.0046 |
| 0.0071 | 75.0 | 3000 | 0.0039 |
| 0.0019 | 87.5 | 3500 | 0.0015 |
| 0.001 | 100.0 | 4000 | 0.0002 |
| 0.0007 | 112.5 | 4500 | 0.0002 |
| 0.0 | 125.0 | 5000 | 0.0002 |
### Framework versions
- Transformers 4.39.0
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ILKT/2024-06-19_21-58-03
|
ILKT
| 2024-06-19T20:01:24Z | 134 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ILKT",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-06-19T20:00:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
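As the card is a stub, a minimal sketch for this custom-architecture encoder; `trust_remote_code=True` is needed because the repository ships its own ILKT modeling code (per the `custom_code` tag). The mean pooling below is an assumption.

```python
# Minimal sketch: embeddings from this custom ILKT encoder.
# Mean pooling over the last hidden state is an assumption.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "ILKT/2024-06-19_21-58-03"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("A sample sentence.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state
embedding = hidden.mean(dim=1)
```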
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DBangshu/Base_gemma_e5_8_0
|
DBangshu
| 2024-06-19T19:59:46Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T19:56:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
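A minimal sketch, assuming this is a standard Gemma-architecture causal-LM checkpoint (per the `gemma` and `text-generation` tags); the prompt is illustrative.

```python
# Minimal sketch: text generation with this Gemma-based checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="DBangshu/Base_gemma_e5_8_0")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```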
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anderloh/TestV2
|
anderloh
| 2024-06-19T19:59:41Z | 172 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test",
"base_model:finetune:anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-06-19T19:37:57Z |
---
base_model: anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test
tags:
- audio-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: HuggingfaceTest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HuggingfaceTest
This model is a fine-tuned version of [anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test](https://huggingface.co/anderloh/Hugginhface-master-wav2vec-pretreined-5-class-train-test) on the anderloh/Master5Class dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8156
- Accuracy: 0.7028
## Model description
More information needed
## Intended uses & limitations
More information needed
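Until the authors add details, a minimal sketch using the standard audio-classification pipeline; the clip path is a placeholder.

```python
# Minimal sketch: classifying an audio clip with this fine-tuned
# wav2vec2 model. "example_clip.wav" is a placeholder path.
from transformers import pipeline

clf = pipeline("audio-classification", model="anderloh/TestV2")
print(clf("example_clip.wav"))
```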
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 350.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| No log | 0.92 | 3 | 1.5989 | 0.3427 |
| No log | 1.85 | 6 | 1.5988 | 0.3427 |
| No log | 2.77 | 9 | 1.5986 | 0.3427 |
| No log | 4.0 | 13 | 1.5981 | 0.3427 |
| No log | 4.92 | 16 | 1.5976 | 0.3357 |
| No log | 5.85 | 19 | 1.5970 | 0.3427 |
| No log | 6.77 | 22 | 1.5963 | 0.3392 |
| No log | 8.0 | 26 | 1.5953 | 0.3357 |
| No log | 8.92 | 29 | 1.5943 | 0.3287 |
| No log | 9.85 | 32 | 1.5933 | 0.3287 |
| No log | 10.77 | 35 | 1.5922 | 0.3217 |
| No log | 12.0 | 39 | 1.5906 | 0.3182 |
| No log | 12.92 | 42 | 1.5892 | 0.3147 |
| No log | 13.85 | 45 | 1.5877 | 0.3007 |
| No log | 14.77 | 48 | 1.5862 | 0.2937 |
| 1.5907 | 16.0 | 52 | 1.5841 | 0.2972 |
| 1.5907 | 16.92 | 55 | 1.5824 | 0.2832 |
| 1.5907 | 17.85 | 58 | 1.5806 | 0.2797 |
| 1.5907 | 18.77 | 61 | 1.5788 | 0.2692 |
| 1.5907 | 20.0 | 65 | 1.5762 | 0.2692 |
| 1.5907 | 20.92 | 68 | 1.5740 | 0.2657 |
| 1.5907 | 21.85 | 71 | 1.5717 | 0.2552 |
| 1.5907 | 22.77 | 74 | 1.5694 | 0.2517 |
| 1.5907 | 24.0 | 78 | 1.5661 | 0.2378 |
| 1.5907 | 24.92 | 81 | 1.5635 | 0.2343 |
| 1.5907 | 25.85 | 84 | 1.5608 | 0.2238 |
| 1.5907 | 26.77 | 87 | 1.5581 | 0.2238 |
| 1.5907 | 28.0 | 91 | 1.5542 | 0.2273 |
| 1.5907 | 28.92 | 94 | 1.5511 | 0.2273 |
| 1.5907 | 29.85 | 97 | 1.5479 | 0.2273 |
| 1.5431 | 30.77 | 100 | 1.5448 | 0.2273 |
| 1.5431 | 32.0 | 104 | 1.5408 | 0.2273 |
| 1.5431 | 32.92 | 107 | 1.5380 | 0.2273 |
| 1.5431 | 33.85 | 110 | 1.5359 | 0.2273 |
| 1.5431 | 34.77 | 113 | 1.5345 | 0.2273 |
| 1.5431 | 36.0 | 117 | 1.5335 | 0.2273 |
| 1.5431 | 36.92 | 120 | 1.5341 | 0.2273 |
| 1.5431 | 37.85 | 123 | 1.5361 | 0.2273 |
| 1.5431 | 38.77 | 126 | 1.5397 | 0.2273 |
| 1.5431 | 40.0 | 130 | 1.5479 | 0.2273 |
| 1.5431 | 40.92 | 133 | 1.5564 | 0.2273 |
| 1.5431 | 41.85 | 136 | 1.5679 | 0.2273 |
| 1.5431 | 42.77 | 139 | 1.5822 | 0.2273 |
| 1.5431 | 44.0 | 143 | 1.6002 | 0.2273 |
| 1.5431 | 44.92 | 146 | 1.6109 | 0.2273 |
| 1.5431 | 45.85 | 149 | 1.6146 | 0.2273 |
| 1.4033 | 46.77 | 152 | 1.6131 | 0.2273 |
| 1.4033 | 48.0 | 156 | 1.6008 | 0.2273 |
| 1.4033 | 48.92 | 159 | 1.5862 | 0.2413 |
| 1.4033 | 49.85 | 162 | 1.5726 | 0.2692 |
| 1.4033 | 50.77 | 165 | 1.5599 | 0.2692 |
| 1.4033 | 52.0 | 169 | 1.5459 | 0.2867 |
| 1.4033 | 52.92 | 172 | 1.5383 | 0.2937 |
| 1.4033 | 53.85 | 175 | 1.5311 | 0.3147 |
| 1.4033 | 54.77 | 178 | 1.5242 | 0.3252 |
| 1.4033 | 56.0 | 182 | 1.5169 | 0.3357 |
| 1.4033 | 56.92 | 185 | 1.5103 | 0.3427 |
| 1.4033 | 57.85 | 188 | 1.5056 | 0.3462 |
| 1.4033 | 58.77 | 191 | 1.4995 | 0.3462 |
| 1.4033 | 60.0 | 195 | 1.4939 | 0.3497 |
| 1.4033 | 60.92 | 198 | 1.4870 | 0.3601 |
| 1.2485 | 61.85 | 201 | 1.4829 | 0.3671 |
| 1.2485 | 62.77 | 204 | 1.4735 | 0.3741 |
| 1.2485 | 64.0 | 208 | 1.4612 | 0.3811 |
| 1.2485 | 64.92 | 211 | 1.4492 | 0.3986 |
| 1.2485 | 65.85 | 214 | 1.4365 | 0.4126 |
| 1.2485 | 66.77 | 217 | 1.4227 | 0.4231 |
| 1.2485 | 68.0 | 221 | 1.4096 | 0.4336 |
| 1.2485 | 68.92 | 224 | 1.4010 | 0.4371 |
| 1.2485 | 69.85 | 227 | 1.3950 | 0.4406 |
| 1.2485 | 70.77 | 230 | 1.3920 | 0.4371 |
| 1.2485 | 72.0 | 234 | 1.3799 | 0.4406 |
| 1.2485 | 72.92 | 237 | 1.3669 | 0.4476 |
| 1.2485 | 73.85 | 240 | 1.3515 | 0.4545 |
| 1.2485 | 74.77 | 243 | 1.3401 | 0.4720 |
| 1.2485 | 76.0 | 247 | 1.3286 | 0.4825 |
| 1.1198 | 76.92 | 250 | 1.3175 | 0.4860 |
| 1.1198 | 77.85 | 253 | 1.3067 | 0.4895 |
| 1.1198 | 78.77 | 256 | 1.3013 | 0.4825 |
| 1.1198 | 80.0 | 260 | 1.2954 | 0.4790 |
| 1.1198 | 80.92 | 263 | 1.2897 | 0.4860 |
| 1.1198 | 81.85 | 266 | 1.2832 | 0.4860 |
| 1.1198 | 82.77 | 269 | 1.2712 | 0.4825 |
| 1.1198 | 84.0 | 273 | 1.2584 | 0.4930 |
| 1.1198 | 84.92 | 276 | 1.2516 | 0.4965 |
| 1.1198 | 85.85 | 279 | 1.2456 | 0.5 |
| 1.1198 | 86.77 | 282 | 1.2444 | 0.5105 |
| 1.1198 | 88.0 | 286 | 1.2373 | 0.5105 |
| 1.1198 | 88.92 | 289 | 1.2309 | 0.5140 |
| 1.1198 | 89.85 | 292 | 1.2219 | 0.5210 |
| 1.1198 | 90.77 | 295 | 1.2145 | 0.5210 |
| 1.1198 | 92.0 | 299 | 1.2054 | 0.5280 |
| 0.9915 | 92.92 | 302 | 1.1982 | 0.5350 |
| 0.9915 | 93.85 | 305 | 1.1913 | 0.5385 |
| 0.9915 | 94.77 | 308 | 1.1859 | 0.5455 |
| 0.9915 | 96.0 | 312 | 1.1794 | 0.5490 |
| 0.9915 | 96.92 | 315 | 1.1734 | 0.5455 |
| 0.9915 | 97.85 | 318 | 1.1638 | 0.5524 |
| 0.9915 | 98.77 | 321 | 1.1550 | 0.5524 |
| 0.9915 | 100.0 | 325 | 1.1465 | 0.5490 |
| 0.9915 | 100.92 | 328 | 1.1444 | 0.5594 |
| 0.9915 | 101.85 | 331 | 1.1359 | 0.5629 |
| 0.9915 | 102.77 | 334 | 1.1271 | 0.5664 |
| 0.9915 | 104.0 | 338 | 1.1090 | 0.5769 |
| 0.9915 | 104.92 | 341 | 1.0972 | 0.5944 |
| 0.9915 | 105.85 | 344 | 1.0901 | 0.6014 |
| 0.9915 | 106.77 | 347 | 1.0809 | 0.6084 |
| 0.8834 | 108.0 | 351 | 1.0683 | 0.6119 |
| 0.8834 | 108.92 | 354 | 1.0605 | 0.6224 |
| 0.8834 | 109.85 | 357 | 1.0563 | 0.6259 |
| 0.8834 | 110.77 | 360 | 1.0538 | 0.6224 |
| 0.8834 | 112.0 | 364 | 1.0491 | 0.6154 |
| 0.8834 | 112.92 | 367 | 1.0441 | 0.6119 |
| 0.8834 | 113.85 | 370 | 1.0358 | 0.6119 |
| 0.8834 | 114.77 | 373 | 1.0194 | 0.6224 |
| 0.8834 | 116.0 | 377 | 1.0034 | 0.6294 |
| 0.8834 | 116.92 | 380 | 0.9991 | 0.6259 |
| 0.8834 | 117.85 | 383 | 0.9960 | 0.6259 |
| 0.8834 | 118.77 | 386 | 0.9911 | 0.6294 |
| 0.8834 | 120.0 | 390 | 0.9834 | 0.6434 |
| 0.8834 | 120.92 | 393 | 0.9776 | 0.6434 |
| 0.8834 | 121.85 | 396 | 0.9773 | 0.6434 |
| 0.8834 | 122.77 | 399 | 0.9735 | 0.6434 |
| 0.7786 | 124.0 | 403 | 0.9731 | 0.6399 |
| 0.7786 | 124.92 | 406 | 0.9728 | 0.6434 |
| 0.7786 | 125.85 | 409 | 0.9657 | 0.6573 |
| 0.7786 | 126.77 | 412 | 0.9548 | 0.6573 |
| 0.7786 | 128.0 | 416 | 0.9424 | 0.6643 |
| 0.7786 | 128.92 | 419 | 0.9391 | 0.6678 |
| 0.7786 | 129.85 | 422 | 0.9418 | 0.6678 |
| 0.7786 | 130.77 | 425 | 0.9476 | 0.6608 |
| 0.7786 | 132.0 | 429 | 0.9457 | 0.6643 |
| 0.7786 | 132.92 | 432 | 0.9413 | 0.6643 |
| 0.7786 | 133.85 | 435 | 0.9334 | 0.6678 |
| 0.7786 | 134.77 | 438 | 0.9329 | 0.6678 |
| 0.7786 | 136.0 | 442 | 0.9334 | 0.6713 |
| 0.7786 | 136.92 | 445 | 0.9265 | 0.6713 |
| 0.7786 | 137.85 | 448 | 0.9187 | 0.6713 |
| 0.7133 | 138.77 | 451 | 0.9169 | 0.6678 |
| 0.7133 | 140.0 | 455 | 0.9142 | 0.6713 |
| 0.7133 | 140.92 | 458 | 0.9131 | 0.6713 |
| 0.7133 | 141.85 | 461 | 0.9161 | 0.6783 |
| 0.7133 | 142.77 | 464 | 0.9224 | 0.6678 |
| 0.7133 | 144.0 | 468 | 0.9139 | 0.6748 |
| 0.7133 | 144.92 | 471 | 0.9090 | 0.6748 |
| 0.7133 | 145.85 | 474 | 0.9073 | 0.6713 |
| 0.7133 | 146.77 | 477 | 0.9110 | 0.6608 |
| 0.7133 | 148.0 | 481 | 0.9167 | 0.6573 |
| 0.7133 | 148.92 | 484 | 0.9118 | 0.6643 |
| 0.7133 | 149.85 | 487 | 0.8996 | 0.6713 |
| 0.7133 | 150.77 | 490 | 0.8904 | 0.6748 |
| 0.7133 | 152.0 | 494 | 0.8889 | 0.6748 |
| 0.7133 | 152.92 | 497 | 0.8899 | 0.6713 |
| 0.6674 | 153.85 | 500 | 0.8874 | 0.6748 |
| 0.6674 | 154.77 | 503 | 0.8874 | 0.6748 |
| 0.6674 | 156.0 | 507 | 0.8905 | 0.6748 |
| 0.6674 | 156.92 | 510 | 0.8881 | 0.6783 |
| 0.6674 | 157.85 | 513 | 0.8829 | 0.6748 |
| 0.6674 | 158.77 | 516 | 0.8809 | 0.6783 |
| 0.6674 | 160.0 | 520 | 0.8781 | 0.6783 |
| 0.6674 | 160.92 | 523 | 0.8776 | 0.6818 |
| 0.6674 | 161.85 | 526 | 0.8796 | 0.6783 |
| 0.6674 | 162.77 | 529 | 0.8795 | 0.6818 |
| 0.6674 | 164.0 | 533 | 0.8797 | 0.6783 |
| 0.6674 | 164.92 | 536 | 0.8707 | 0.6783 |
| 0.6674 | 165.85 | 539 | 0.8697 | 0.6783 |
| 0.6674 | 166.77 | 542 | 0.8724 | 0.6783 |
| 0.6674 | 168.0 | 546 | 0.8704 | 0.6748 |
| 0.6674 | 168.92 | 549 | 0.8694 | 0.6748 |
| 0.6305 | 169.85 | 552 | 0.8740 | 0.6748 |
| 0.6305 | 170.77 | 555 | 0.8713 | 0.6748 |
| 0.6305 | 172.0 | 559 | 0.8682 | 0.6783 |
| 0.6305 | 172.92 | 562 | 0.8688 | 0.6783 |
| 0.6305 | 173.85 | 565 | 0.8693 | 0.6818 |
| 0.6305 | 174.77 | 568 | 0.8744 | 0.6783 |
| 0.6305 | 176.0 | 572 | 0.8760 | 0.6783 |
| 0.6305 | 176.92 | 575 | 0.8696 | 0.6853 |
| 0.6305 | 177.85 | 578 | 0.8669 | 0.6853 |
| 0.6305 | 178.77 | 581 | 0.8641 | 0.6853 |
| 0.6305 | 180.0 | 585 | 0.8697 | 0.6713 |
| 0.6305 | 180.92 | 588 | 0.8678 | 0.6748 |
| 0.6305 | 181.85 | 591 | 0.8621 | 0.6818 |
| 0.6305 | 182.77 | 594 | 0.8557 | 0.6888 |
| 0.6305 | 184.0 | 598 | 0.8481 | 0.6888 |
| 0.6095 | 184.92 | 601 | 0.8429 | 0.6888 |
| 0.6095 | 185.85 | 604 | 0.8413 | 0.6888 |
| 0.6095 | 186.77 | 607 | 0.8402 | 0.6923 |
| 0.6095 | 188.0 | 611 | 0.8415 | 0.6888 |
| 0.6095 | 188.92 | 614 | 0.8410 | 0.6923 |
| 0.6095 | 189.85 | 617 | 0.8389 | 0.6853 |
| 0.6095 | 190.77 | 620 | 0.8354 | 0.6853 |
| 0.6095 | 192.0 | 624 | 0.8357 | 0.6888 |
| 0.6095 | 192.92 | 627 | 0.8401 | 0.6958 |
| 0.6095 | 193.85 | 630 | 0.8449 | 0.6958 |
| 0.6095 | 194.77 | 633 | 0.8479 | 0.6958 |
| 0.6095 | 196.0 | 637 | 0.8455 | 0.6923 |
| 0.6095 | 196.92 | 640 | 0.8422 | 0.6923 |
| 0.6095 | 197.85 | 643 | 0.8425 | 0.6923 |
| 0.6095 | 198.77 | 646 | 0.8437 | 0.6923 |
| 0.5908 | 200.0 | 650 | 0.8367 | 0.6958 |
| 0.5908 | 200.92 | 653 | 0.8347 | 0.6993 |
| 0.5908 | 201.85 | 656 | 0.8287 | 0.6958 |
| 0.5908 | 202.77 | 659 | 0.8260 | 0.6923 |
| 0.5908 | 204.0 | 663 | 0.8264 | 0.6958 |
| 0.5908 | 204.92 | 666 | 0.8295 | 0.6958 |
| 0.5908 | 205.85 | 669 | 0.8302 | 0.6923 |
| 0.5908 | 206.77 | 672 | 0.8285 | 0.6923 |
| 0.5908 | 208.0 | 676 | 0.8311 | 0.6923 |
| 0.5908 | 208.92 | 679 | 0.8321 | 0.6923 |
| 0.5908 | 209.85 | 682 | 0.8306 | 0.6923 |
| 0.5908 | 210.77 | 685 | 0.8303 | 0.6923 |
| 0.5908 | 212.0 | 689 | 0.8256 | 0.6993 |
| 0.5908 | 212.92 | 692 | 0.8230 | 0.6958 |
| 0.5908 | 213.85 | 695 | 0.8194 | 0.6958 |
| 0.5908 | 214.77 | 698 | 0.8183 | 0.6958 |
| 0.5763 | 216.0 | 702 | 0.8232 | 0.6958 |
| 0.5763 | 216.92 | 705 | 0.8237 | 0.6888 |
| 0.5763 | 217.85 | 708 | 0.8196 | 0.6993 |
| 0.5763 | 218.77 | 711 | 0.8142 | 0.6993 |
| 0.5763 | 220.0 | 715 | 0.8115 | 0.6993 |
| 0.5763 | 220.92 | 718 | 0.8130 | 0.6993 |
| 0.5763 | 221.85 | 721 | 0.8156 | 0.7028 |
| 0.5763 | 222.77 | 724 | 0.8201 | 0.6958 |
| 0.5763 | 224.0 | 728 | 0.8227 | 0.6958 |
| 0.5763 | 224.92 | 731 | 0.8232 | 0.6958 |
| 0.5763 | 225.85 | 734 | 0.8198 | 0.6923 |
| 0.5763 | 226.77 | 737 | 0.8151 | 0.6923 |
| 0.5763 | 228.0 | 741 | 0.8136 | 0.6923 |
| 0.5763 | 228.92 | 744 | 0.8134 | 0.6923 |
| 0.5763 | 229.85 | 747 | 0.8123 | 0.6958 |
| 0.57 | 230.77 | 750 | 0.8095 | 0.6958 |
| 0.57 | 232.0 | 754 | 0.8082 | 0.6958 |
| 0.57 | 232.92 | 757 | 0.8084 | 0.6958 |
| 0.57 | 233.85 | 760 | 0.8114 | 0.6923 |
| 0.57 | 234.77 | 763 | 0.8130 | 0.6923 |
| 0.57 | 236.0 | 767 | 0.8154 | 0.6923 |
| 0.57 | 236.92 | 770 | 0.8160 | 0.6923 |
| 0.57 | 237.85 | 773 | 0.8126 | 0.6888 |
| 0.57 | 238.77 | 776 | 0.8114 | 0.6888 |
| 0.57 | 240.0 | 780 | 0.8041 | 0.6923 |
| 0.57 | 240.92 | 783 | 0.8006 | 0.6923 |
| 0.57 | 241.85 | 786 | 0.7987 | 0.6958 |
| 0.57 | 242.77 | 789 | 0.7977 | 0.6993 |
| 0.57 | 244.0 | 793 | 0.8001 | 0.6993 |
| 0.57 | 244.92 | 796 | 0.8044 | 0.6958 |
| 0.57 | 245.85 | 799 | 0.8082 | 0.6958 |
| 0.5456 | 246.77 | 802 | 0.8121 | 0.6888 |
| 0.5456 | 248.0 | 806 | 0.8107 | 0.6888 |
| 0.5456 | 248.92 | 809 | 0.8064 | 0.6958 |
| 0.5456 | 249.85 | 812 | 0.8042 | 0.6958 |
| 0.5456 | 250.77 | 815 | 0.8006 | 0.6958 |
| 0.5456 | 252.0 | 819 | 0.7969 | 0.6958 |
| 0.5456 | 252.92 | 822 | 0.7955 | 0.6993 |
| 0.5456 | 253.85 | 825 | 0.7973 | 0.6958 |
| 0.5456 | 254.77 | 828 | 0.8001 | 0.6958 |
| 0.5456 | 256.0 | 832 | 0.8035 | 0.6888 |
| 0.5456 | 256.92 | 835 | 0.8035 | 0.6853 |
| 0.5456 | 257.85 | 838 | 0.8012 | 0.6923 |
| 0.5456 | 258.77 | 841 | 0.8000 | 0.6923 |
| 0.5456 | 260.0 | 845 | 0.7963 | 0.6888 |
| 0.5456 | 260.92 | 848 | 0.7928 | 0.6958 |
| 0.5369 | 261.85 | 851 | 0.7919 | 0.6923 |
| 0.5369 | 262.77 | 854 | 0.7913 | 0.6888 |
| 0.5369 | 264.0 | 858 | 0.7929 | 0.6888 |
| 0.5369 | 264.92 | 861 | 0.7955 | 0.6818 |
| 0.5369 | 265.85 | 864 | 0.7963 | 0.6853 |
| 0.5369 | 266.77 | 867 | 0.7952 | 0.6888 |
| 0.5369 | 268.0 | 871 | 0.7936 | 0.6888 |
| 0.5369 | 268.92 | 874 | 0.7929 | 0.6853 |
| 0.5369 | 269.85 | 877 | 0.7933 | 0.6853 |
| 0.5369 | 270.77 | 880 | 0.7941 | 0.6853 |
| 0.5369 | 272.0 | 884 | 0.7940 | 0.6853 |
| 0.5369 | 272.92 | 887 | 0.7929 | 0.6853 |
| 0.5369 | 273.85 | 890 | 0.7930 | 0.6853 |
| 0.5369 | 274.77 | 893 | 0.7943 | 0.6853 |
| 0.5369 | 276.0 | 897 | 0.7944 | 0.6853 |
| 0.5388 | 276.92 | 900 | 0.7933 | 0.6853 |
| 0.5388 | 277.85 | 903 | 0.7914 | 0.6853 |
| 0.5388 | 278.77 | 906 | 0.7904 | 0.6853 |
| 0.5388 | 280.0 | 910 | 0.7888 | 0.6853 |
| 0.5388 | 280.92 | 913 | 0.7900 | 0.6853 |
| 0.5388 | 281.85 | 916 | 0.7906 | 0.6853 |
| 0.5388 | 282.77 | 919 | 0.7911 | 0.6853 |
| 0.5388 | 284.0 | 923 | 0.7907 | 0.6853 |
| 0.5388 | 284.92 | 926 | 0.7907 | 0.6853 |
| 0.5388 | 285.85 | 929 | 0.7905 | 0.6818 |
| 0.5388 | 286.77 | 932 | 0.7900 | 0.6818 |
| 0.5388 | 288.0 | 936 | 0.7901 | 0.6853 |
| 0.5388 | 288.92 | 939 | 0.7902 | 0.6853 |
| 0.5388 | 289.85 | 942 | 0.7910 | 0.6853 |
| 0.5388 | 290.77 | 945 | 0.7914 | 0.6888 |
| 0.5388 | 292.0 | 949 | 0.7920 | 0.6888 |
| 0.5261 | 292.92 | 952 | 0.7928 | 0.6853 |
| 0.5261 | 293.85 | 955 | 0.7932 | 0.6888 |
| 0.5261 | 294.77 | 958 | 0.7925 | 0.6888 |
| 0.5261 | 296.0 | 962 | 0.7922 | 0.6888 |
| 0.5261 | 296.92 | 965 | 0.7919 | 0.6888 |
| 0.5261 | 297.85 | 968 | 0.7922 | 0.6888 |
| 0.5261 | 298.77 | 971 | 0.7921 | 0.6888 |
| 0.5261 | 300.0 | 975 | 0.7912 | 0.6853 |
| 0.5261 | 300.92 | 978 | 0.7907 | 0.6853 |
| 0.5261 | 301.85 | 981 | 0.7896 | 0.6853 |
| 0.5261 | 302.77 | 984 | 0.7885 | 0.6888 |
| 0.5261 | 304.0 | 988 | 0.7877 | 0.6888 |
| 0.5261 | 304.92 | 991 | 0.7874 | 0.6888 |
| 0.5261 | 305.85 | 994 | 0.7876 | 0.6888 |
| 0.5261 | 306.77 | 997 | 0.7879 | 0.6888 |
| 0.5188 | 308.0 | 1001 | 0.7884 | 0.6888 |
| 0.5188 | 308.92 | 1004 | 0.7887 | 0.6888 |
| 0.5188 | 309.85 | 1007 | 0.7890 | 0.6888 |
| 0.5188 | 310.77 | 1010 | 0.7894 | 0.6888 |
| 0.5188 | 312.0 | 1014 | 0.7899 | 0.6888 |
| 0.5188 | 312.92 | 1017 | 0.7904 | 0.6888 |
| 0.5188 | 313.85 | 1020 | 0.7907 | 0.6923 |
| 0.5188 | 314.77 | 1023 | 0.7910 | 0.6923 |
| 0.5188 | 316.0 | 1027 | 0.7912 | 0.6923 |
| 0.5188 | 316.92 | 1030 | 0.7912 | 0.6923 |
| 0.5188 | 317.85 | 1033 | 0.7912 | 0.6923 |
| 0.5188 | 318.77 | 1036 | 0.7913 | 0.6923 |
| 0.5188 | 320.0 | 1040 | 0.7913 | 0.6923 |
| 0.5188 | 320.92 | 1043 | 0.7912 | 0.6923 |
| 0.5188 | 321.85 | 1046 | 0.7912 | 0.6923 |
| 0.5188 | 322.77 | 1049 | 0.7911 | 0.6923 |
| 0.5194 | 323.08 | 1050 | 0.7911 | 0.6923 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
silent666/Qwen-Qwen1.5-0.5B-1718827148
|
silent666
| 2024-06-19T19:59:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2024-06-19T19:59:08Z |
---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
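A minimal sketch, assuming the repository holds a standard PEFT adapter for Qwen/Qwen1.5-0.5B (the `base_model` declared above):

```python
# Minimal sketch: loading this adapter on Qwen/Qwen1.5-0.5B.
# Assumes a standard PEFT adapter layout in the repository.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "silent666/Qwen-Qwen1.5-0.5B-1718827148")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
```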
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
vxst2/rl_course_vizdoom_health_gathering_supreme
|
vxst2
| 2024-06-19T19:59:10Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-06-19T19:58:55Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.28 +/- 2.50
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r vxst2/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
silent666/google-gemma-2b-1718827098
|
silent666
| 2024-06-19T19:58:19Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2024-06-19T19:58:18Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
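A minimal loading sketch, assuming this repository is a standard LoRA-style adapter on `google/gemma-2b` (the adapter type and intended task are not stated in this card):

```python
# Hedged sketch: assumes a causal-LM adapter; adjust if the adapter config says otherwise.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "silent666/google-gemma-2b-1718827098")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
```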
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
ekaterina-blatova-jb/model__v0
|
ekaterina-blatova-jb
| 2024-06-19T19:55:52Z | 209 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T19:54:16Z |
---
{}
---
## Evaluation results
Validation loss on the whole input: 0.7323479452170432
Validation loss on completion: 0.7441273222211748
|
silent666/Qwen-Qwen1.5-1.8B-1718826892
|
silent666
| 2024-06-19T19:54:53Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2024-06-19T19:54:52Z |
---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
milenarmus/TTB_tallying_noisy_flipped_choice_shuffled_cue_order-model_noise0.8
|
milenarmus
| 2024-06-19T19:52:47Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T19:48:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/catgirl_cpt_inst-GGUF
|
mradermacher
| 2024-06-19T19:46:40Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:k8tems/catgirl_cpt_inst",
"base_model:quantized:k8tems/catgirl_cpt_inst",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-19T18:53:41Z |
---
base_model: k8tems/catgirl_cpt_inst
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/k8tems/catgirl_cpt_inst
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
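For a quick local sanity check, a minimal sketch with the `llama-cpp-python` bindings might look like this; the file name is one of the quants listed below, and everything else is an assumption rather than something this card prescribes:

```python
# Hedged sketch using llama-cpp-python; this card only distributes the GGUF
# files and does not prescribe a particular runtime.
from llama_cpp import Llama

llm = Llama(
    model_path="catgirl_cpt_inst.Q4_K_M.gguf",  # one of the quants listed below
    n_ctx=4096,
)
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```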
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.IQ3_XS.gguf) | IQ3_XS | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.IQ3_S.gguf) | IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.IQ3_M.gguf) | IQ3_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.Q8_0.gguf) | Q8_0 | 7.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/catgirl_cpt_inst-GGUF/resolve/main/catgirl_cpt_inst.f16.gguf) | f16 | 14.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
oohtmeel/Bert_protein_classifier
|
oohtmeel
| 2024-06-19T19:43:33Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"proteins",
"Biology",
"classifier",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-19T16:38:59Z |
---
tags:
- proteins
- Biology
- classifier
---
## Model info
A finetuned Bert-Base-Uncased model for a multilabel classification task: predicting protein functions from their amino acid sequences. The model takes sequence data and a protein class name as input and outputs probability scores (how likely it is that the sequence belongs to that group).
## Model Usage
To try the model in the hosted inference widget, paste a protein sequence into the input box; the model returns the probability that each GO term is associated with that sequence.
For example, MMSTTHLLVFLLGVVTLTTPTFGTYESPNYGKPPTPVFKPPKVKPPPYEPKPPVYEPPKKEKPEPKPPVYAPPKKEKHGPKPTMYEPPKKEKPEPKPPVYTPPKKEVPKPKPPVYEPPKKEKPEPKPPIYTPPKKEKPEPKPPVYEPPKKEKPEPKPPVYTPPKKEKPEPKPPVYEPPKKPPMYEPKPPKPPVYTPPKKEKPEPKPPMYEPPKKPPMYEPKPPKPPVYTPPKKEKPEPKPPMYQPPNNPPIYEPKPPKPPVYAPPKEEKPKPKPPVYEPPAHEPPYGHYPGHPPLGKPQ
outputs the following scores:
```
[
[
{
"label": "GO:0000122",
"score": 0.29775485396385193
},
{
"label": "GO:0000070",
"score": 0.10477513074874878
},
{
"label": "GO:0000075",
"score": 0.08593793958425522
},
{
"label": "GO:0000118",
"score": 0.05860009789466858
},
{
"label": "GO:0000082",
"score": 0.05373986065387726
},
{
"label": "GO:0000077",
"score": 0.03928716108202934
},
{
"label": "GO:0000096",
"score": 0.03705739229917526
},
{
"label": "GO:0000079",
"score": 0.02797592058777809
},
{
"label": "GO:0000045",
"score": 0.026528609916567802
},
{
"label": "GO:0000097",
"score": 0.026119187474250793
},
{
"label": "GO:0000086",
"score": 0.019697198644280434
},
{
"label": "GO:0000049",
"score": 0.018551582470536232
},
{
"label": "GO:0000041",
"score": 0.016929756850004196
},
{
"label": "GO:0000054",
"score": 0.015105823054909706
},
{
"label": "GO:0000083",
"score": 0.01434631273150444
},
{
"label": "GO:0000105",
"score": 0.013960960321128368
},
{
"label": "GO:0000076",
"score": 0.013064960949122906
},
{
"label": "GO:0000109",
"score": 0.012523632496595383
},
{
"label": "GO:0000113",
"score": 0.012152223847806454
},
{
"label": "GO:0000062",
"score": 0.01127714291214943
},
{
"label": "GO:0000101",
"score": 0.011041304096579552
},
```
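For local inference, a minimal sketch with the `transformers` pipeline API might look like the following; the multilabel behaviour (a score per GO term) follows the description above, but the exact pipeline configuration is an assumption:

```python
# Hedged sketch: assumes the checkpoint works with the standard
# text-classification pipeline and returns a score per GO-term label.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="oohtmeel/Bert_protein_classifier",
    top_k=None,  # return scores for every label, not just the best one
)

sequence = "MMSTTHLLVFLLGVVTLTTPTFGTYESPNYGKPP"  # truncated example sequence
scores = classifier(sequence)
print(scores[0][:5])  # the five highest-scoring GO terms
```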
|
rhysjones/gpt2-124M-edu-fineweb-10B
|
rhysjones
| 2024-06-19T19:43:29Z | 444 | 6 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"dataset:HuggingFaceFW/fineweb-edu",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-07T17:31:52Z |
---
datasets: HuggingFaceFW/fineweb-edu
widget:
- example_title: Example interaction
text: During photosynthesis in green plants
inference:
parameters:
repetition_penalty: 1.3
language:
- en
library_name: transformers
license: mit
---
# Model Card for gpt2-124M-edu-fineweb-10B
A 124M parameter GPT2 model trained with the 10B fineweb-edu dataset using [https://github.com/karpathy/llm.c](https://github.com/karpathy/llm.c)
Training took 20 hours on a single 4090 GPU (limited to 350W), giving the following graphs:

## Training
The training parameters were:
```
./train_gpt2cu \
-i "dev/data/edu_fineweb10B/edu_fineweb_train_*.bin" \
-j "dev/data/edu_fineweb10B/edu_fineweb_val_*.bin" \
-o log124M \
-e "d12" \
-b 56 -t 1024 \
-d 458752 \
-r 1 \
-z 1 \
-c 0.1 \
-l 0.002 \
-q 0.0 \
-u 700 \
-n 5000 \
-v 250 -s 20000 \
-h 1
```
The model has had no further finetuning.
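For a quick qualitative check, a minimal generation sketch with `transformers` might look like this; the `repetition_penalty` mirrors the widget settings above, while the remaining parameters are assumptions:

```python
# Hedged sketch: repetition_penalty matches the inference widget settings;
# the other generation settings are illustrative choices.
from transformers import pipeline

generator = pipeline("text-generation", model="rhysjones/gpt2-124M-edu-fineweb-10B")
result = generator(
    "During photosynthesis in green plants",
    max_new_tokens=64,
    repetition_penalty=1.3,
    do_sample=True,
)
print(result[0]["generated_text"])
```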
## Evaluation
Evaluations were run with the [Eleuther AI Harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463), as described in the [open_llm_leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard), and compared with the published results for [openai-community/gpt2](https://huggingface.co/openai-community/gpt2):

| Eval Test | Score |
| --------- | ----- |
| arc_challenge (25 shot) | 24.49 |
| gsm8k (5 shot) | 0.08 |
| hellaswag (10 shot) | 32.64 |
| mmlu (5 shot) | 26.06 |
| truthfulqa (0 shot) | 42.45 |
| winogrande (5 shot) | 52.17 |
| **Overall Score** | **29.65** |
|
DreamGallery/Qwen-Qwen1.5-7B-1718826171
|
DreamGallery
| 2024-06-19T19:42:52Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"region:us"
] | null | 2024-06-19T19:42:51Z |
---
base_model: Qwen/Qwen1.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
tomekkorbak/introspection-test2
|
tomekkorbak
| 2024-06-19T19:41:47Z | 40 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T14:13:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cibfaye/nllb-en-wo-v1
|
cibfaye
| 2024-06-19T19:41:01Z | 105 | 0 |
transformers
|
[
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-19T19:15:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
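In the absence of documented usage, a hedged sketch assuming this checkpoint keeps the standard NLLB tokenizer conventions (`eng_Latn` source, `wol_Latn` target, as the model name suggests an English-to-Wolof finetune) might look like this; none of these language codes are confirmed by the card:

```python
# Hedged sketch: assumes NLLB-style language codes survive the finetune.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cibfaye/nllb-en-wo-v1", src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("cibfaye/nllb-en-wo-v1")

inputs = tokenizer("Good morning, how are you?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("wol_Latn"),
    max_new_tokens=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```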
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|