| modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-04 12:28:55) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (539 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-04 12:28:29) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| Downtown-Case/CohereForAI_c4ai-command-r-08-2024-exl2-3.75bpw | Downtown-Case | 2024-08-30T22:15:59Z | 26 | 1 | transformers | ["transformers", "safetensors", "cohere", "text-generation", "conversational", "en", "fr", "de", "es", "it", "pt", "ja", "ko", "zh", "ar", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "exl2", "region:us"] | text-generation | 2024-08-30T21:40:34Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
library_name: transformers
extra_gated_prompt: "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy)."
extra_gated_fields:
Name: text
Affiliation: text
Country:
type: select
options:
- Aruba
- Afghanistan
- Angola
- Anguilla
- Åland Islands
- Albania
- Andorra
- United Arab Emirates
- Argentina
- Armenia
- American Samoa
- Antarctica
- French Southern Territories
- Antigua and Barbuda
- Australia
- Austria
- Azerbaijan
- Burundi
- Belgium
- Benin
- Bonaire Sint Eustatius and Saba
- Burkina Faso
- Bangladesh
- Bulgaria
- Bahrain
- Bahamas
- Bosnia and Herzegovina
- Saint Barthélemy
- Belarus
- Belize
- Bermuda
- Plurinational State of Bolivia
- Brazil
- Barbados
- Brunei-Darussalam
- Bhutan
- Bouvet-Island
- Botswana
- Central African Republic
- Canada
- Cocos (Keeling) Islands
- Switzerland
- Chile
- China
- Côte-dIvoire
- Cameroon
- Democratic Republic of the Congo
- Cook Islands
- Colombia
- Comoros
- Cabo Verde
- Costa Rica
- Cuba
- Curaçao
- Christmas Island
- Cayman Islands
- Cyprus
- Czechia
- Germany
- Djibouti
- Dominica
- Denmark
- Dominican Republic
- Algeria
- Ecuador
- Egypt
- Eritrea
- Western Sahara
- Spain
- Estonia
- Ethiopia
- Finland
- Fiji
- Falkland Islands (Malvinas)
- France
- Faroe Islands
- Federated States of Micronesia
- Gabon
- United Kingdom
- Georgia
- Guernsey
- Ghana
- Gibraltar
- Guinea
- Guadeloupe
- Gambia
- Guinea Bissau
- Equatorial Guinea
- Greece
- Grenada
- Greenland
- Guatemala
- French Guiana
- Guam
- Guyana
- Hong Kong
- Heard Island and McDonald Islands
- Honduras
- Croatia
- Haiti
- Hungary
- Indonesia
- Isle of Man
- India
- British Indian Ocean Territory
- Ireland
- Islamic Republic of Iran
- Iraq
- Iceland
- Israel
- Italy
- Jamaica
- Jersey
- Jordan
- Japan
- Kazakhstan
- Kenya
- Kyrgyzstan
- Cambodia
- Kiribati
- Saint-Kitts-and-Nevis
- South Korea
- Kuwait
- Lao-Peoples-Democratic-Republic
- Lebanon
- Liberia
- Libya
- Saint-Lucia
- Liechtenstein
- Sri Lanka
- Lesotho
- Lithuania
- Luxembourg
- Latvia
- Macao
- Saint Martin (French-part)
- Morocco
- Monaco
- Republic of Moldova
- Madagascar
- Maldives
- Mexico
- Marshall Islands
- North Macedonia
- Mali
- Malta
- Myanmar
- Montenegro
- Mongolia
- Northern Mariana Islands
- Mozambique
- Mauritania
- Montserrat
- Martinique
- Mauritius
- Malawi
- Malaysia
- Mayotte
- Namibia
- New Caledonia
- Niger
- Norfolk Island
- Nigeria
- Nicaragua
- Niue
- Netherlands
- Norway
- Nepal
- Nauru
- New Zealand
- Oman
- Pakistan
- Panama
- Pitcairn
- Peru
- Philippines
- Palau
- Papua New Guinea
- Poland
- Puerto Rico
- North Korea
- Portugal
- Paraguay
- State of Palestine
- French Polynesia
- Qatar
- Réunion
- Romania
- Russia
- Rwanda
- Saudi Arabia
- Sudan
- Senegal
- Singapore
- South Georgia and the South Sandwich Islands
- Saint Helena Ascension and Tristan da Cunha
- Svalbard and Jan Mayen
- Solomon Islands
- Sierra Leone
- El Salvador
- San Marino
- Somalia
- Saint Pierre and Miquelon
- Serbia
- South Sudan
- Sao Tome and Principe
- Suriname
- Slovakia
- Slovenia
- Sweden
- Eswatini
- Sint Maarten (Dutch-part)
- Seychelles
- Syrian Arab Republic
- Turks and Caicos Islands
- Chad
- Togo
- Thailand
- Tajikistan
- Tokelau
- Turkmenistan
- Timor Leste
- Tonga
- Trinidad and Tobago
- Tunisia
- Turkey
- Tuvalu
- Taiwan
- United Republic of Tanzania
- Uganda
- Ukraine
- United States Minor Outlying Islands
- Uruguay
- United-States
- Uzbekistan
- Holy See (Vatican City State)
- Saint Vincent and the Grenadines
- Bolivarian Republic of Venezuela
- Virgin Islands British
- Virgin Islands U.S.
- VietNam
- Vanuatu
- Wallis and Futuna
- Samoa
- Yemen
- South Africa
- Zambia
- Zimbabwe
Receive email updates on C4AI and Cohere research, events, products and services?:
type: select
options:
- Yes
- No
I agree to use this model for non-commercial use ONLY: checkbox
---
# Model Card for C4AI Command R 08-2024
## Model Summary
<!-- Provide a quick summary of what the model is/does. -->
C4AI Command R 08-2024 is a research release of a 35 billion parameter highly performant generative model. Command R 08-2024 is a large language model with open weights optimized for a variety of use cases including reasoning, summarization, and question answering. Command R 08-2024 supports multilingual generation, having been trained on 23 languages and evaluated in 10, and offers highly performant RAG capabilities.
- Developed by: Cohere and [Cohere For AI](https://cohere.for.ai)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license); also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: c4ai-command-r-08-2024
- Model Size: 35 billion parameters
- Context length: 128K
**Try C4AI Command R**
If you want to try Command R before downloading the weights, the model is hosted in a Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/c4ai-command?model=command-r-08-2024).
**Usage**
Please use `transformers` version 4.39.1 or higher.
```python
# pip install 'transformers>=4.39.1'
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-08-2024 chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model uses supervised fine-tuning (SFT) and preference training to align model behavior to human preferences for helpfulness and safety. We use grouped query attention (GQA) to improve inference speed.
**Languages covered**: The model has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian) and evaluated on 10 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese).
**Context length**: Command R 08-2024 supports a context length of 128K.
### Tool use & Agent capabilities:
Command R 08-2024 has been specifically trained with conversational tool use capabilities. These have been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template will likely reduce performance.
Command R 08-2024’s tool use functionality takes a conversation as input (with an optional user-system preamble), along with a list of available tools. The model will then generate a json-formatted list of actions to execute on a subset of those tools. Command R 08-2024 may use one of its supplied tools more than once.
The model has been trained to recognise a special `directly_answer` tool, which it uses to indicate that it doesn’t want to use any of its other tools. The ability to abstain from calling a specific tool can be useful in a range of situations, such as greeting a user, or asking clarifying questions. We recommend including the `directly_answer` tool, but it can be removed or renamed if required.
Comprehensive documentation for working with Command R 08-2024's tool use prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
Command R 08-2024 also supports Hugging Face's [tool use API](https://huggingface.co/docs/transformers/main/en/chat_templating#advanced-tool-use--function-calling).
The code snippet below shows a minimal working example of how to render a prompt.
<details>
<summary><b>Usage: Rendering Tool Use Prompts [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use:
tools = [
{
"name": "internet_search",
"description": "Returns a list of relevant document snippets for a textual query retrieved from the internet",
"parameter_definitions": {
"query": {
"description": "Query to search the internet with",
"type": 'str',
"required": True
}
}
},
{
'name': "directly_answer",
"description": "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
'parameter_definitions': {}
}
]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_tool_use_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Usage: Rendering prompts with the Tool Use API [CLICK TO EXPAND]</b> </summary>
```python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# Define tools available for the model to use
# Type hints and docstrings from Python functions are automatically extracted
def internet_search(query: str):
"""
Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query: Query to search the internet with
"""
pass
def directly_answer():
"""
Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
tools = [internet_search, directly_answer]
# render the tool use prompt as a string:
tool_use_prompt = tokenizer.apply_chat_template(
conversation,
tools=tools,
tokenize=False,
add_generation_prompt=True,
)
print(tool_use_prompt)
```
</details>
<details>
<summary><b>Example Rendered Tool Use Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.
## Available Tools
Here is a list of tools that you have available to you:
```python
def internet_search(query: str) -> List[Dict]:
"""Returns a list of relevant document snippets for a textual query retrieved from the internet
Args:
query (str): Query to search the internet with
"""
pass
```
```python
def directly_answer() -> List[Dict]:
"""Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history
"""
pass
```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write 'Action:' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user's last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:
```json
[
{
"tool_name": title of the tool in the specification,
"parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters
}
]```<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Tool Use Completion [CLICK TO EXPAND]</b></summary>
````
Action: ```json
[
{
"tool_name": "internet_search",
"parameters": {
"query": "biggest penguin in the world"
}
}
]
```
````
</details>
### Grounded Generation and RAG Capabilities:
Command R 08-2024 has been specifically trained with grounded generation capabilities. This means that it can generate responses based on a list of supplied document snippets, and it will include grounding spans (citations) in its response indicating the source of the information. This can be used to enable behaviors such as grounded summarization and the final step of Retrieval Augmented Generation (RAG). This behavior has been trained into the model via a mixture of supervised fine-tuning and preference fine-tuning, using a specific prompt template. Deviating from this prompt template may reduce performance, but we encourage experimentation.
Command R 08-2024’s grounded generation behavior takes a conversation as input (with an optional user-supplied system preamble, indicating task, context and desired output style), along with a list of retrieved document snippets. The document snippets should be chunks, rather than long documents, typically around 100-400 words per chunk. Document snippets consist of key-value pairs. The keys should be short descriptive strings, the values can be text or semi-structured.
By default, Command R 08-2024 will generate grounded responses by first predicting which documents are relevant, then predicting which ones it will cite, then generating an answer. Finally, it will insert grounding spans into the answer. See below for an example. This is referred to as `accurate` grounded generation.
The model is trained with a number of other answering modes, which can be selected by prompt changes. A `fast` citation mode is supported in the tokenizer, which will directly generate an answer with grounding spans in it, without first writing the answer out in full. This sacrifices some grounding accuracy in favor of generating fewer tokens.
Comprehensive documentation for working with Command R 08-2024's grounded generation prompt template can be found [here](https://docs.cohere.com/docs/prompting-command-r).
The code snippet below shows a minimal working example of how to render a prompt.
<details>
<summary> <b>Usage: Rendering Grounded Generation prompts [CLICK TO EXPAND]</b> </summary>
````python
from transformers import AutoTokenizer
model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# define conversation input:
conversation = [
{"role": "user", "content": "Whats the biggest penguin in the world?"}
]
# define documents to ground on:
documents = [
{ "title": "Tall penguins", "text": "Emperor penguins are the tallest growing up to 122 cm in height." },
{ "title": "Penguin habitats", "text": "Emperor penguins only live in Antarctica."}
]
# render the tool use prompt as a string:
grounded_generation_prompt = tokenizer.apply_grounded_generation_template(
conversation,
documents=documents,
citation_mode="accurate", # or "fast"
tokenize=False,
add_generation_prompt=True,
)
print(grounded_generation_prompt)
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Prompt [CLICK TO EXPAND]</b></summary>
````
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|># Safety Preamble
The instructions in this section override those in the task description and style guide sections. Don't answer questions that are harmful or immoral.
# System Preamble
## Basic Rules
You are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user's requests, you cite your sources in your answers, according to those instructions.
# User Preamble
## Task and Context
You help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user's needs as best you can, which will be wide-ranging.
## Style Guide
Unless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Whats the biggest penguin in the world?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|><results>
Document: 0
title: Tall penguins
text: Emperor penguins are the tallest growing up to 122 cm in height.
Document: 1
title: Penguin habitats
text: Emperor penguins only live in Antarctica.
</results><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Carefully perform the following instructions, in order, starting each with a new line.
Firstly, Decide which of the retrieved documents are relevant to the user's last input by writing 'Relevant Documents:' followed by comma-separated list of document numbers. If none are relevant, you should instead write 'None'.
Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user's last input by writing 'Cited Documents:' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write 'None'.
Thirdly, Write 'Answer:' followed by a response to the user's last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.
Finally, Write 'Grounded answer:' followed by a response to the user's last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
````
</details>
<details>
<summary><b>Example Rendered Grounded Generation Completion [CLICK TO EXPAND]</b></summary>
````
Relevant Documents: 0,1
Cited Documents: 0,1
Answer: The Emperor Penguin is the tallest or biggest penguin in the world. It is a bird that lives only in Antarctica and grows to a height of around 122 centimetres.
Grounded answer: The <co: 0>Emperor Penguin</co: 0> is the <co: 0>tallest</co: 0> or biggest penguin in the world. It is a bird that <co: 1>lives only in Antarctica</co: 1> and <co: 0>grows to a height of around 122 centimetres.</co: 0>
````
</details>
### Code Capabilities:
Command R 08-2024 has been optimized to interact with your code by requesting code snippets, code explanations, or code rewrites. It might not perform well out-of-the-box for pure code completion. For better performance, we also recommend using a low temperature (and even greedy decoding) for code-generation related instructions.
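As a hedged illustration of that recommendation (not part of the original card), the sketch below follows the Usage example above but switches to greedy decoding for a code-rewrite instruction; the prompt and token budget are illustrative.
```python
# Minimal sketch: greedy decoding for a code-related instruction, following the same
# pattern as the Usage example above. Prompt and max_new_tokens are illustrative.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/c4ai-command-r-08-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Rewrite this loop as a list comprehension:\nresult = []\nfor x in range(10):\n    result.append(x * x)"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=200,
    do_sample=False,  # greedy decoding, per the low-temperature recommendation above
)
print(tokenizer.decode(gen_tokens[0]))
```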
### Model Card Contact
For errors or additional questions about details in this model card, contact [info@for.ai](mailto:info@for.ai).
### Terms of Use:
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 35 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try Chat:
You can try Command-R chat in the playground [here](https://dashboard.cohere.com/playground/chat).
|
| mradermacher/Einstein-Replete-7B-GGUF | mradermacher | 2024-08-30T22:13:14Z | 22 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:NotASI/Einstein-Replete-7B", "base_model:quantized:NotASI/Einstein-Replete-7B", "endpoints_compatible", "region:us", "conversational"] | null | 2024-08-29T08:06:38Z |
---
base_model: NotASI/Einstein-Replete-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NotASI/Einstein-Replete-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
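One way to load a single-file quant such as the Q4_K_M listed below (an illustration not included in the original card; file name and settings are assumptions) is `llama-cpp-python`:
```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and the Q4_K_M file from the table below has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Einstein-Replete-7B.Q4_K_M.gguf",  # local path to the downloaded quant
    n_ctx=4096,                                    # illustrative context window
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    max_tokens=100,
)
print(out["choices"][0]["message"]["content"])
```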
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.IQ3_S.gguf) | IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.IQ3_M.gguf) | IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF/resolve/main/Einstein-Replete-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
| mradermacher/Einstein-Replete-7B-i1-GGUF | mradermacher | 2024-08-30T22:13:10Z | 32 | 0 | transformers | ["transformers", "gguf", "mergekit", "merge", "en", "base_model:NotASI/Einstein-Replete-7B", "base_model:quantized:NotASI/Einstein-Replete-7B", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2024-08-29T10:07:35Z |
---
base_model: NotASI/Einstein-Replete-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/NotASI/Einstein-Replete-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Einstein-Replete-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
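As an illustration that is not part of the original card, one of the imatrix quants below can be fetched programmatically with `huggingface_hub` and then loaded in any llama.cpp-based runtime; the chosen file is just an example.
```python
# Minimal sketch: download a single imatrix quant from this repo, then point a
# llama.cpp-based runtime at the returned local path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Einstein-Replete-7B-i1-GGUF",
    filename="Einstein-Replete-7B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table below
)
print(path)  # local cache path of the downloaded GGUF file
```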
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Einstein-Replete-7B-i1-GGUF/resolve/main/Einstein-Replete-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
| mergekit-community/mergekit-ties-svasnwf | mergekit-community | 2024-08-30T22:09:46Z | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:Hastagaras/Jamet-8B-L3-MK.V-Blackroot", "base_model:merge:Hastagaras/Jamet-8B-L3-MK.V-Blackroot", "base_model:Sao10K/L3-8B-Stheno-v3.2", "base_model:merge:Sao10K/L3-8B-Stheno-v3.2", "base_model:Vikhrmodels/Vikhr-7B-instruct_0.4", "base_model:merge:Vikhrmodels/Vikhr-7B-instruct_0.4", "base_model:crestf411/L3.1-8B-sunfall-v0.6.1-dpo", "base_model:merge:crestf411/L3.1-8B-sunfall-v0.6.1-dpo", "base_model:maldv/badger-iota-llama-3-8b", "base_model:merge:maldv/badger-iota-llama-3-8b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-08-30T22:05:06Z |
---
base_model:
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Vikhrmodels/Vikhr-7B-instruct_0.4
- crestf411/L3.1-8B-sunfall-v0.6.1-dpo
- Sao10K/L3-8B-Stheno-v3.2
- maldv/badger-iota-llama-3-8b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Vikhrmodels/Vikhr-7B-instruct_0.4](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4) as a base.
### Models Merged
The following models were included in the merge:
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
* [crestf411/L3.1-8B-sunfall-v0.6.1-dpo](https://huggingface.co/crestf411/L3.1-8B-sunfall-v0.6.1-dpo)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [maldv/badger-iota-llama-3-8b](https://huggingface.co/maldv/badger-iota-llama-3-8b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Vikhrmodels/Vikhr-7B-instruct_0.4
- model: crestf411/L3.1-8B-sunfall-v0.6.1-dpo # Another RP Model trained on... stuff
parameters:
density: 0.4
weight: 0.25
- model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot # Another RP / Storytelling Model
parameters:
density: 0.5
weight: 0.3
- model: maldv/badger-iota-llama-3-8b #Megamerge - Helps with General Knowledge
parameters:
density: 0.6
weight: 0.35
- model: Sao10K/L3-8B-Stheno-v3.2 # This is Stheno v3.2's Initial Name
parameters:
density: 0.7
weight: 0.4
merge_method: ties
base_model: Vikhrmodels/Vikhr-7B-instruct_0.4
parameters:
int8_mask: true
rescale: true
normalize: false
dtype: bfloat16
```
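For context (not part of the original card), a config like the one above is normally executed with mergekit itself. The sketch below is a hedged illustration assuming mergekit's documented Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) and hypothetical paths `config.yml` and `./merged`.
```python
# Minimal sketch, assuming mergekit is installed (pip install mergekit) and the YAML
# configuration above has been saved to config.yml; paths and options are illustrative.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",               # hypothetical output directory
    options=MergeOptions(cuda=False),  # set cuda=True to run the merge on GPU
)
```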
|
| shashank970613/lora-flan-t5-large-chat | shashank970613 | 2024-08-30T22:08:32Z | 105 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2024-08-30T20:52:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| gglabs/Mistral-Nemo-12B-FC-Chat-0830-2-epoch | gglabs | 2024-08-30T22:06:30Z | 5 | 0 | transformers | ["transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "base_model:quantized:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-08-30T21:31:36Z |
---
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
| ellarose/test-trainer-alternate | ellarose | 2024-08-30T22:06:04Z | 5 | 0 | setfit | ["setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us"] | text-classification | 2024-08-28T21:54:34Z |
---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: ballades (formes fixes)
- text: prison fiction
- text: gregorian chants
- text: argentina--buenos aires, port of
- text: passepieds (music)
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9555555555555556
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-----------|:-----------------------------------------------------------------------------------------------------------------|
| subject | <ul><li>'vidourle river (france)'</li><li>'knockout kings 2000 (game)'</li><li>'social practice (art)'</li></ul> |
| genre/form | <ul><li>'hadith stories'</li><li>'discographies'</li><li>'dance drama'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9556 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("ellarose/test-trainer-alternate")
# Run inference
preds = model("prison fiction")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 2.415 | 10 |
| Label | Training Sample Count |
|:-----------|:----------------------|
| subject | 500 |
| genre/form | 500 |
### Training Hyperparameters
- batch_size: (40, 40)
- num_epochs: (10, 10)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
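The hyperparameters above correspond to fields of SetFit's `TrainingArguments`. The sketch below is not the original training script; it shows how a comparable run could be configured, with tiny placeholder datasets standing in for the 500-example-per-label training set.
```python
# Minimal sketch (not the original training script): wiring the hyperparameters above
# into SetFit's Trainer. The datasets here are placeholders.
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Placeholder data; the real run used 500 "subject" and 500 "genre/form" examples.
train_dataset = Dataset.from_dict({
    "text": ["vidourle river (france)", "prison fiction", "social practice (art)", "discographies"],
    "label": ["subject", "genre/form", "subject", "genre/form"],
})
eval_dataset = Dataset.from_dict({
    "text": ["gregorian chants", "knockout kings 2000 (game)"],
    "label": ["genre/form", "subject"],
})

args = TrainingArguments(
    batch_size=(40, 40),                # (embedding phase, classifier phase)
    num_epochs=(10, 10),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
print(trainer.evaluate())  # reports accuracy on the eval split
```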
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:---------:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.3379 | - |
| 0.0040 | 50 | 0.3311 | - |
| 0.0080 | 100 | 0.3642 | - |
| 0.0120 | 150 | 0.3077 | - |
| 0.0160 | 200 | 0.2542 | - |
| 0.0200 | 250 | 0.233 | - |
| 0.0240 | 300 | 0.23 | - |
| 0.0279 | 350 | 0.2247 | - |
| 0.0319 | 400 | 0.2009 | - |
| 0.0359 | 450 | 0.2354 | - |
| 0.0399 | 500 | 0.1823 | - |
| 0.0439 | 550 | 0.2048 | - |
| 0.0479 | 600 | 0.1546 | - |
| 0.0519 | 650 | 0.1363 | - |
| 0.0559 | 700 | 0.1031 | - |
| 0.0599 | 750 | 0.0668 | - |
| 0.0639 | 800 | 0.1156 | - |
| 0.0679 | 850 | 0.0435 | - |
| 0.0719 | 900 | 0.0495 | - |
| 0.0758 | 950 | 0.046 | - |
| 0.0798 | 1000 | 0.0424 | - |
| 0.0838 | 1050 | 0.1312 | - |
| 0.0878 | 1100 | 0.0246 | - |
| 0.0918 | 1150 | 0.0273 | - |
| 0.0958 | 1200 | 0.0075 | - |
| 0.0998 | 1250 | 0.0203 | - |
| 0.1038 | 1300 | 0.0073 | - |
| 0.1078 | 1350 | 0.0328 | - |
| 0.1118 | 1400 | 0.0274 | - |
| 0.1158 | 1450 | 0.0042 | - |
| 0.1198 | 1500 | 0.0494 | - |
| 0.1238 | 1550 | 0.0413 | - |
| 0.1277 | 1600 | 0.0036 | - |
| 0.1317 | 1650 | 0.0329 | - |
| 0.1357 | 1700 | 0.0168 | - |
| 0.1397 | 1750 | 0.0028 | - |
| 0.1437 | 1800 | 0.0227 | - |
| 0.1477 | 1850 | 0.002 | - |
| 0.1517 | 1900 | 0.0121 | - |
| 0.1557 | 1950 | 0.0018 | - |
| 0.1597 | 2000 | 0.0019 | - |
| 0.1637 | 2050 | 0.001 | - |
| 0.1677 | 2100 | 0.0009 | - |
| 0.1717 | 2150 | 0.0012 | - |
| 0.1756 | 2200 | 0.0007 | - |
| 0.1796 | 2250 | 0.001 | - |
| 0.1836 | 2300 | 0.0008 | - |
| 0.1876 | 2350 | 0.0009 | - |
| 0.1916 | 2400 | 0.001 | - |
| 0.1956 | 2450 | 0.0009 | - |
| 0.1996 | 2500 | 0.0247 | - |
| 0.2036 | 2550 | 0.0007 | - |
| 0.2076 | 2600 | 0.0008 | - |
| 0.2116 | 2650 | 0.0008 | - |
| 0.2156 | 2700 | 0.0006 | - |
| 0.2196 | 2750 | 0.0023 | - |
| 0.2236 | 2800 | 0.0007 | - |
| 0.2275 | 2850 | 0.0004 | - |
| 0.2315 | 2900 | 0.0054 | - |
| 0.2355 | 2950 | 0.0007 | - |
| 0.2395 | 3000 | 0.0004 | - |
| 0.2435 | 3050 | 0.0007 | - |
| 0.2475 | 3100 | 0.0244 | - |
| 0.2515 | 3150 | 0.0243 | - |
| 0.2555 | 3200 | 0.0005 | - |
| 0.2595 | 3250 | 0.0004 | - |
| 0.2635 | 3300 | 0.0004 | - |
| 0.2675 | 3350 | 0.0004 | - |
| 0.2715 | 3400 | 0.0004 | - |
| 0.2754 | 3450 | 0.0482 | - |
| 0.2794 | 3500 | 0.0004 | - |
| 0.2834 | 3550 | 0.0005 | - |
| 0.2874 | 3600 | 0.0005 | - |
| 0.2914 | 3650 | 0.0007 | - |
| 0.2954 | 3700 | 0.0063 | - |
| 0.2994 | 3750 | 0.0043 | - |
| 0.3034 | 3800 | 0.0005 | - |
| 0.3074 | 3850 | 0.0366 | - |
| 0.3114 | 3900 | 0.0245 | - |
| 0.3154 | 3950 | 0.0242 | - |
| 0.3194 | 4000 | 0.0003 | - |
| 0.3234 | 4050 | 0.0007 | - |
| 0.3273 | 4100 | 0.0123 | - |
| 0.3313 | 4150 | 0.0004 | - |
| 0.3353 | 4200 | 0.0007 | - |
| 0.3393 | 4250 | 0.0238 | - |
| 0.3433 | 4300 | 0.0002 | - |
| 0.3473 | 4350 | 0.0238 | - |
| 0.3513 | 4400 | 0.0003 | - |
| 0.3553 | 4450 | 0.0224 | - |
| 0.3593 | 4500 | 0.0006 | - |
| 0.3633 | 4550 | 0.0005 | - |
| 0.3673 | 4600 | 0.0004 | - |
| 0.3713 | 4650 | 0.0025 | - |
| 0.3752 | 4700 | 0.0003 | - |
| 0.3792 | 4750 | 0.0218 | - |
| 0.3832 | 4800 | 0.001 | - |
| 0.3872 | 4850 | 0.0004 | - |
| 0.3912 | 4900 | 0.0004 | - |
| 0.3952 | 4950 | 0.0161 | - |
| 0.3992 | 5000 | 0.0008 | - |
| 0.4032 | 5050 | 0.0024 | - |
| 0.4072 | 5100 | 0.0003 | - |
| 0.4112 | 5150 | 0.0002 | - |
| 0.4152 | 5200 | 0.0005 | - |
| 0.4192 | 5250 | 0.0021 | - |
| 0.4232 | 5300 | 0.0235 | - |
| 0.4271 | 5350 | 0.0035 | - |
| 0.4311 | 5400 | 0.0007 | - |
| 0.4351 | 5450 | 0.0007 | - |
| 0.4391 | 5500 | 0.0217 | - |
| 0.4431 | 5550 | 0.0006 | - |
| 0.4471 | 5600 | 0.0054 | - |
| 0.4511 | 5650 | 0.002 | - |
| 0.4551 | 5700 | 0.0013 | - |
| 0.4591 | 5750 | 0.0026 | - |
| 0.4631 | 5800 | 0.0051 | - |
| 0.4671 | 5850 | 0.0003 | - |
| 0.4711 | 5900 | 0.0003 | - |
| 0.4750 | 5950 | 0.0119 | - |
| 0.4790 | 6000 | 0.0011 | - |
| 0.4830 | 6050 | 0.0253 | - |
| 0.4870 | 6100 | 0.0244 | - |
| 0.4910 | 6150 | 0.0002 | - |
| 0.4950 | 6200 | 0.0002 | - |
| 0.4990 | 6250 | 0.0002 | - |
| 0.5030 | 6300 | 0.0167 | - |
| 0.5070 | 6350 | 0.0002 | - |
| 0.5110 | 6400 | 0.0003 | - |
| 0.5150 | 6450 | 0.0012 | - |
| 0.5190 | 6500 | 0.003 | - |
| 0.5230 | 6550 | 0.0003 | - |
| 0.5269 | 6600 | 0.0003 | - |
| 0.5309 | 6650 | 0.0006 | - |
| 0.5349 | 6700 | 0.0026 | - |
| 0.5389 | 6750 | 0.0004 | - |
| 0.5429 | 6800 | 0.0001 | - |
| 0.5469 | 6850 | 0.0002 | - |
| 0.5509 | 6900 | 0.0003 | - |
| 0.5549 | 6950 | 0.0028 | - |
| 0.5589 | 7000 | 0.0022 | - |
| 0.5629 | 7050 | 0.0007 | - |
| 0.5669 | 7100 | 0.0004 | - |
| 0.5709 | 7150 | 0.0002 | - |
| 0.5749 | 7200 | 0.0001 | - |
| 0.5788 | 7250 | 0.0122 | - |
| 0.5828 | 7300 | 0.0017 | - |
| 0.5868 | 7350 | 0.0001 | - |
| 0.5908 | 7400 | 0.0002 | - |
| 0.5948 | 7450 | 0.0001 | - |
| 0.5988 | 7500 | 0.0003 | - |
| 0.6028 | 7550 | 0.0011 | - |
| 0.6068 | 7600 | 0.0002 | - |
| 0.6108 | 7650 | 0.0003 | - |
| 0.6148 | 7700 | 0.0001 | - |
| 0.6188 | 7750 | 0.0001 | - |
| 0.6228 | 7800 | 0.0001 | - |
| 0.6267 | 7850 | 0.0002 | - |
| 0.6307 | 7900 | 0.0149 | - |
| 0.6347 | 7950 | 0.0106 | - |
| 0.6387 | 8000 | 0.0015 | - |
| 0.6427 | 8050 | 0.0001 | - |
| 0.6467 | 8100 | 0.0009 | - |
| 0.6507 | 8150 | 0.0015 | - |
| 0.6547 | 8200 | 0.0306 | - |
| 0.6587 | 8250 | 0.0054 | - |
| 0.6627 | 8300 | 0.0011 | - |
| 0.6667 | 8350 | 0.0003 | - |
| 0.6707 | 8400 | 0.0001 | - |
| 0.6747 | 8450 | 0.0024 | - |
| 0.6786 | 8500 | 0.0001 | - |
| 0.6826 | 8550 | 0.0001 | - |
| 0.6866 | 8600 | 0.0001 | - |
| 0.6906 | 8650 | 0.0072 | - |
| 0.6946 | 8700 | 0.0001 | - |
| 0.6986 | 8750 | 0.0002 | - |
| 0.7026 | 8800 | 0.0001 | - |
| 0.7066 | 8850 | 0.0243 | - |
| 0.7106 | 8900 | 0.0001 | - |
| 0.7146 | 8950 | 0.0001 | - |
| 0.7186 | 9000 | 0.0001 | - |
| 0.7226 | 9050 | 0.0001 | - |
| 0.7265 | 9100 | 0.0001 | - |
| 0.7305 | 9150 | 0.0001 | - |
| 0.7345 | 9200 | 0.0008 | - |
| 0.7385 | 9250 | 0.021 | - |
| 0.7425 | 9300 | 0.0229 | - |
| 0.7465 | 9350 | 0.0001 | - |
| 0.7505 | 9400 | 0.002 | - |
| 0.7545 | 9450 | 0.0008 | - |
| 0.7585 | 9500 | 0.0225 | - |
| 0.7625 | 9550 | 0.0001 | - |
| 0.7665 | 9600 | 0.0041 | - |
| 0.7705 | 9650 | 0.0012 | - |
| 0.7745 | 9700 | 0.0034 | - |
| 0.7784 | 9750 | 0.0011 | - |
| 0.7824 | 9800 | 0.0008 | - |
| 0.7864 | 9850 | 0.0101 | - |
| 0.7904 | 9900 | 0.0039 | - |
| 0.7944 | 9950 | 0.0001 | - |
| 0.7984 | 10000 | 0.0005 | - |
| 0.8024 | 10050 | 0.0011 | - |
| 0.8064 | 10100 | 0.0025 | - |
| 0.8104 | 10150 | 0.0001 | - |
| 0.8144 | 10200 | 0.0003 | - |
| 0.8184 | 10250 | 0.0002 | - |
| 0.8224 | 10300 | 0.0002 | - |
| 0.8263 | 10350 | 0.0001 | - |
| 0.8303 | 10400 | 0.0007 | - |
| 0.8343 | 10450 | 0.0005 | - |
| 0.8383 | 10500 | 0.0005 | - |
| 0.8423 | 10550 | 0.0001 | - |
| 0.8463 | 10600 | 0.0206 | - |
| 0.8503 | 10650 | 0.0023 | - |
| 0.8543 | 10700 | 0.0001 | - |
| 0.8583 | 10750 | 0.0001 | - |
| 0.8623 | 10800 | 0.0001 | - |
| 0.8663 | 10850 | 0.0001 | - |
| 0.8703 | 10900 | 0.0001 | - |
| 0.8743 | 10950 | 0.0002 | - |
| 0.8782 | 11000 | 0.0007 | - |
| 0.8822 | 11050 | 0.0025 | - |
| 0.8862 | 11100 | 0.0001 | - |
| 0.8902 | 11150 | 0.0001 | - |
| 0.8942 | 11200 | 0.0001 | - |
| 0.8982 | 11250 | 0.0001 | - |
| 0.9022 | 11300 | 0.0047 | - |
| 0.9062 | 11350 | 0.0001 | - |
| 0.9102 | 11400 | 0.0002 | - |
| 0.9142 | 11450 | 0.0001 | - |
| 0.9182 | 11500 | 0.0013 | - |
| 0.9222 | 11550 | 0.0011 | - |
| 0.9261 | 11600 | 0.0001 | - |
| 0.9301 | 11650 | 0.001 | - |
| 0.9341 | 11700 | 0.0145 | - |
| 0.9381 | 11750 | 0.0001 | - |
| 0.9421 | 11800 | 0.0156 | - |
| 0.9461 | 11850 | 0.0001 | - |
| 0.9501 | 11900 | 0.0016 | - |
| 0.9541 | 11950 | 0.0001 | - |
| 0.9581 | 12000 | 0.0011 | - |
| 0.9621 | 12050 | 0.002 | - |
| 0.9661 | 12100 | 0.0001 | - |
| 0.9701 | 12150 | 0.0004 | - |
| 0.9741 | 12200 | 0.0007 | - |
| 0.9780 | 12250 | 0.0014 | - |
| 0.9820 | 12300 | 0.0255 | - |
| 0.9860 | 12350 | 0.004 | - |
| 0.9900 | 12400 | 0.002 | - |
| 0.9940 | 12450 | 0.0146 | - |
| 0.9980 | 12500 | 0.0008 | - |
| 1.0 | 12525 | - | 0.0588 |
| 1.0020 | 12550 | 0.0175 | - |
| 1.0060 | 12600 | 0.0001 | - |
| 1.0100 | 12650 | 0.0006 | - |
| 1.0140 | 12700 | 0.0002 | - |
| 1.0180 | 12750 | 0.0149 | - |
| 1.0220 | 12800 | 0.0001 | - |
| 1.0259 | 12850 | 0.0001 | - |
| 1.0299 | 12900 | 0.0001 | - |
| 1.0339 | 12950 | 0.003 | - |
| 1.0379 | 13000 | 0.0003 | - |
| 1.0419 | 13050 | 0.0254 | - |
| 1.0459 | 13100 | 0.0001 | - |
| 1.0499 | 13150 | 0.0001 | - |
| 1.0539 | 13200 | 0.0001 | - |
| 1.0579 | 13250 | 0.0001 | - |
| 1.0619 | 13300 | 0.0003 | - |
| 1.0659 | 13350 | 0.0244 | - |
| 1.0699 | 13400 | 0.0001 | - |
| 1.0739 | 13450 | 0.0001 | - |
| 1.0778 | 13500 | 0.0175 | - |
| 1.0818 | 13550 | 0.0002 | - |
| 1.0858 | 13600 | 0.0002 | - |
| 1.0898 | 13650 | 0.0001 | - |
| 1.0938 | 13700 | 0.0001 | - |
| 1.0978 | 13750 | 0.0002 | - |
| 1.1018 | 13800 | 0.0001 | - |
| 1.1058 | 13850 | 0.0001 | - |
| 1.1098 | 13900 | 0.0001 | - |
| 1.1138 | 13950 | 0.0005 | - |
| 1.1178 | 14000 | 0.0001 | - |
| 1.1218 | 14050 | 0.0001 | - |
| 1.1257 | 14100 | 0.0002 | - |
| 1.1297 | 14150 | 0.0001 | - |
| 1.1337 | 14200 | 0.0002 | - |
| 1.1377 | 14250 | 0.0008 | - |
| 1.1417 | 14300 | 0.0001 | - |
| 1.1457 | 14350 | 0.0001 | - |
| 1.1497 | 14400 | 0.0013 | - |
| 1.1537 | 14450 | 0.0001 | - |
| 1.1577 | 14500 | 0.0001 | - |
| 1.1617 | 14550 | 0.0004 | - |
| 1.1657 | 14600 | 0.0001 | - |
| 1.1697 | 14650 | 0.0001 | - |
| 1.1737 | 14700 | 0.001 | - |
| 1.1776 | 14750 | 0.0156 | - |
| 1.1816 | 14800 | 0.0001 | - |
| 1.1856 | 14850 | 0.0003 | - |
| 1.1896 | 14900 | 0.0045 | - |
| 1.1936 | 14950 | 0.0011 | - |
| 1.1976 | 15000 | 0.0015 | - |
| 1.2016 | 15050 | 0.0017 | - |
| 1.2056 | 15100 | 0.017 | - |
| 1.2096 | 15150 | 0.0001 | - |
| 1.2136 | 15200 | 0.0011 | - |
| 1.2176 | 15250 | 0.0003 | - |
| 1.2216 | 15300 | 0.0001 | - |
| 1.2255 | 15350 | 0.0001 | - |
| 1.2295 | 15400 | 0.0005 | - |
| 1.2335 | 15450 | 0.0144 | - |
| 1.2375 | 15500 | 0.0001 | - |
| 1.2415 | 15550 | 0.0001 | - |
| 1.2455 | 15600 | 0.0191 | - |
| 1.2495 | 15650 | 0.0001 | - |
| 1.2535 | 15700 | 0.0001 | - |
| 1.2575 | 15750 | 0.0001 | - |
| 1.2615 | 15800 | 0.0001 | - |
| 1.2655 | 15850 | 0.0008 | - |
| 1.2695 | 15900 | 0.0005 | - |
| 1.2735 | 15950 | 0.013 | - |
| 1.2774 | 16000 | 0.0001 | - |
| 1.2814 | 16050 | 0.0201 | - |
| 1.2854 | 16100 | 0.0008 | - |
| 1.2894 | 16150 | 0.0001 | - |
| 1.2934 | 16200 | 0.0001 | - |
| 1.2974 | 16250 | 0.0001 | - |
| 1.3014 | 16300 | 0.0001 | - |
| 1.3054 | 16350 | 0.0 | - |
| 1.3094 | 16400 | 0.0118 | - |
| 1.3134 | 16450 | 0.0253 | - |
| 1.3174 | 16500 | 0.0001 | - |
| 1.3214 | 16550 | 0.0012 | - |
| 1.3253 | 16600 | 0.0017 | - |
| 1.3293 | 16650 | 0.0001 | - |
| 1.3333 | 16700 | 0.0094 | - |
| 1.3373 | 16750 | 0.0001 | - |
| 1.3413 | 16800 | 0.0243 | - |
| 1.3453 | 16850 | 0.049 | - |
| 1.3493 | 16900 | 0.0001 | - |
| 1.3533 | 16950 | 0.0247 | - |
| 1.3573 | 17000 | 0.0001 | - |
| 1.3613 | 17050 | 0.0001 | - |
| 1.3653 | 17100 | 0.0001 | - |
| 1.3693 | 17150 | 0.0246 | - |
| 1.3733 | 17200 | 0.0001 | - |
| 1.3772 | 17250 | 0.0245 | - |
| 1.3812 | 17300 | 0.0012 | - |
| 1.3852 | 17350 | 0.0001 | - |
| 1.3892 | 17400 | 0.0001 | - |
| 1.3932 | 17450 | 0.0001 | - |
| 1.3972 | 17500 | 0.0001 | - |
| 1.4012 | 17550 | 0.0001 | - |
| 1.4052 | 17600 | 0.0002 | - |
| 1.4092 | 17650 | 0.0001 | - |
| 1.4132 | 17700 | 0.0001 | - |
| 1.4172 | 17750 | 0.0039 | - |
| 1.4212 | 17800 | 0.0012 | - |
| 1.4251 | 17850 | 0.0043 | - |
| 1.4291 | 17900 | 0.0001 | - |
| 1.4331 | 17950 | 0.0001 | - |
| 1.4371 | 18000 | 0.0001 | - |
| 1.4411 | 18050 | 0.0001 | - |
| 1.4451 | 18100 | 0.0043 | - |
| 1.4491 | 18150 | 0.0023 | - |
| 1.4531 | 18200 | 0.0001 | - |
| 1.4571 | 18250 | 0.0002 | - |
| 1.4611 | 18300 | 0.0001 | - |
| 1.4651 | 18350 | 0.0001 | - |
| 1.4691 | 18400 | 0.0001 | - |
| 1.4731 | 18450 | 0.0099 | - |
| 1.4770 | 18500 | 0.0001 | - |
| 1.4810 | 18550 | 0.0001 | - |
| 1.4850 | 18600 | 0.03 | - |
| 1.4890 | 18650 | 0.0001 | - |
| 1.4930 | 18700 | 0.0014 | - |
| 1.4970 | 18750 | 0.0 | - |
| 1.5010 | 18800 | 0.0246 | - |
| 1.5050 | 18850 | 0.0001 | - |
| 1.5090 | 18900 | 0.0001 | - |
| 1.5130 | 18950 | 0.0001 | - |
| 1.5170 | 19000 | 0.0001 | - |
| 1.5210 | 19050 | 0.0001 | - |
| 1.5250 | 19100 | 0.0001 | - |
| 1.5289 | 19150 | 0.0001 | - |
| 1.5329 | 19200 | 0.0032 | - |
| 1.5369 | 19250 | 0.0001 | - |
| 1.5409 | 19300 | 0.0001 | - |
| 1.5449 | 19350 | 0.0001 | - |
| 1.5489 | 19400 | 0.025 | - |
| 1.5529 | 19450 | 0.0028 | - |
| 1.5569 | 19500 | 0.0028 | - |
| 1.5609 | 19550 | 0.0001 | - |
| 1.5649 | 19600 | 0.005 | - |
| 1.5689 | 19650 | 0.0024 | - |
| 1.5729 | 19700 | 0.0001 | - |
| 1.5768 | 19750 | 0.0 | - |
| 1.5808 | 19800 | 0.0001 | - |
| 1.5848 | 19850 | 0.0001 | - |
| 1.5888 | 19900 | 0.0001 | - |
| 1.5928 | 19950 | 0.0001 | - |
| 1.5968 | 20000 | 0.0001 | - |
| 1.6008 | 20050 | 0.0001 | - |
| 1.6048 | 20100 | 0.0001 | - |
| 1.6088 | 20150 | 0.0001 | - |
| 1.6128 | 20200 | 0.0 | - |
| 1.6168 | 20250 | 0.0001 | - |
| 1.6208 | 20300 | 0.0001 | - |
| 1.6248 | 20350 | 0.0028 | - |
| 1.6287 | 20400 | 0.0019 | - |
| 1.6327 | 20450 | 0.0115 | - |
| 1.6367 | 20500 | 0.0029 | - |
| 1.6407 | 20550 | 0.0035 | - |
| 1.6447 | 20600 | 0.0005 | - |
| 1.6487 | 20650 | 0.0007 | - |
| 1.6527 | 20700 | 0.0374 | - |
| 1.6567 | 20750 | 0.0064 | - |
| 1.6607 | 20800 | 0.004 | - |
| 1.6647 | 20850 | 0.0009 | - |
| 1.6687 | 20900 | 0.0 | - |
| 1.6727 | 20950 | 0.0017 | - |
| 1.6766 | 21000 | 0.0001 | - |
| 1.6806 | 21050 | 0.0001 | - |
| 1.6846 | 21100 | 0.0001 | - |
| 1.6886 | 21150 | 0.0083 | - |
| 1.6926 | 21200 | 0.0001 | - |
| 1.6966 | 21250 | 0.0001 | - |
| 1.7006 | 21300 | 0.0001 | - |
| 1.7046 | 21350 | 0.0009 | - |
| 1.7086 | 21400 | 0.0001 | - |
| 1.7126 | 21450 | 0.0001 | - |
| 1.7166 | 21500 | 0.0015 | - |
| 1.7206 | 21550 | 0.0001 | - |
| 1.7246 | 21600 | 0.0001 | - |
| 1.7285 | 21650 | 0.0001 | - |
| 1.7325 | 21700 | 0.0007 | - |
| 1.7365 | 21750 | 0.0001 | - |
| 1.7405 | 21800 | 0.0213 | - |
| 1.7445 | 21850 | 0.0007 | - |
| 1.7485 | 21900 | 0.0029 | - |
| 1.7525 | 21950 | 0.0007 | - |
| 1.7565 | 22000 | 0.0025 | - |
| 1.7605 | 22050 | 0.0002 | - |
| 1.7645 | 22100 | 0.0173 | - |
| 1.7685 | 22150 | 0.0012 | - |
| 1.7725 | 22200 | 0.0027 | - |
| 1.7764 | 22250 | 0.0009 | - |
| 1.7804 | 22300 | 0.0013 | - |
| 1.7844 | 22350 | 0.024 | - |
| 1.7884 | 22400 | 0.0043 | - |
| 1.7924 | 22450 | 0.0001 | - |
| 1.7964 | 22500 | 0.0001 | - |
| 1.8004 | 22550 | 0.0109 | - |
| 1.8044 | 22600 | 0.0001 | - |
| 1.8084 | 22650 | 0.0002 | - |
| 1.8124 | 22700 | 0.0246 | - |
| 1.8164 | 22750 | 0.0001 | - |
| 1.8204 | 22800 | 0.0002 | - |
| 1.8244 | 22850 | 0.0248 | - |
| 1.8283 | 22900 | 0.0001 | - |
| 1.8323 | 22950 | 0.0119 | - |
| 1.8363 | 23000 | 0.0254 | - |
| 1.8403 | 23050 | 0.1233 | - |
| 1.8443 | 23100 | 0.0003 | - |
| 1.8483 | 23150 | 0.0001 | - |
| 1.8523 | 23200 | 0.0 | - |
| 1.8563 | 23250 | 0.0 | - |
| 1.8603 | 23300 | 0.0001 | - |
| 1.8643 | 23350 | 0.0001 | - |
| 1.8683 | 23400 | 0.0001 | - |
| 1.8723 | 23450 | 0.0001 | - |
| 1.8762 | 23500 | 0.0001 | - |
| 1.8802 | 23550 | 0.0017 | - |
| 1.8842 | 23600 | 0.0 | - |
| 1.8882 | 23650 | 0.0001 | - |
| 1.8922 | 23700 | 0.0 | - |
| 1.8962 | 23750 | 0.0001 | - |
| 1.9002 | 23800 | 0.0055 | - |
| 1.9042 | 23850 | 0.0001 | - |
| 1.9082 | 23900 | 0.0003 | - |
| 1.9122 | 23950 | 0.0002 | - |
| 1.9162 | 24000 | 0.0007 | - |
| 1.9202 | 24050 | 0.0011 | - |
| 1.9242 | 24100 | 0.0001 | - |
| 1.9281 | 24150 | 0.0001 | - |
| 1.9321 | 24200 | 0.0001 | - |
| 1.9361 | 24250 | 0.0001 | - |
| 1.9401 | 24300 | 0.0192 | - |
| 1.9441 | 24350 | 0.0001 | - |
| 1.9481 | 24400 | 0.0008 | - |
| 1.9521 | 24450 | 0.0 | - |
| 1.9561 | 24500 | 0.0004 | - |
| 1.9601 | 24550 | 0.0001 | - |
| 1.9641 | 24600 | 0.0001 | - |
| 1.9681 | 24650 | 0.0001 | - |
| 1.9721 | 24700 | 0.0001 | - |
| 1.9760 | 24750 | 0.0013 | - |
| 1.9800 | 24800 | 0.0129 | - |
| 1.9840 | 24850 | 0.0024 | - |
| 1.9880 | 24900 | 0.0015 | - |
| 1.9920 | 24950 | 0.0196 | - |
| 1.9960 | 25000 | 0.0164 | - |
| 2.0 | 25050 | 0.002 | 0.0769 |
| 2.0040 | 25100 | 0.0001 | - |
| 2.0080 | 25150 | 0.0156 | - |
| 2.0120 | 25200 | 0.0 | - |
| 2.0160 | 25250 | 0.0169 | - |
| 2.0200 | 25300 | 0.0023 | - |
| 2.0240 | 25350 | 0.0001 | - |
| 2.0279 | 25400 | 0.0 | - |
| 2.0319 | 25450 | 0.001 | - |
| 2.0359 | 25500 | 0.0152 | - |
| 2.0399 | 25550 | 0.0001 | - |
| 2.0439 | 25600 | 0.001 | - |
| 2.0479 | 25650 | 0.0001 | - |
| 2.0519 | 25700 | 0.002 | - |
| 2.0559 | 25750 | 0.0006 | - |
| 2.0599 | 25800 | 0.0031 | - |
| 2.0639 | 25850 | 0.0016 | - |
| 2.0679 | 25900 | 0.0 | - |
| 2.0719 | 25950 | 0.0001 | - |
| 2.0758 | 26000 | 0.0001 | - |
| 2.0798 | 26050 | 0.0159 | - |
| 2.0838 | 26100 | 0.0005 | - |
| 2.0878 | 26150 | 0.0001 | - |
| 2.0918 | 26200 | 0.0001 | - |
| 2.0958 | 26250 | 0.0006 | - |
| 2.0998 | 26300 | 0.0008 | - |
| 2.1038 | 26350 | 0.0 | - |
| 2.1078 | 26400 | 0.0 | - |
| 2.1118 | 26450 | 0.0183 | - |
| 2.1158 | 26500 | 0.0001 | - |
| 2.1198 | 26550 | 0.0167 | - |
| 2.1238 | 26600 | 0.0001 | - |
| 2.1277 | 26650 | 0.0014 | - |
| 2.1317 | 26700 | 0.0003 | - |
| 2.1357 | 26750 | 0.0014 | - |
| 2.1397 | 26800 | 0.0001 | - |
| 2.1437 | 26850 | 0.0001 | - |
| 2.1477 | 26900 | 0.0011 | - |
| 2.1517 | 26950 | 0.0 | - |
| 2.1557 | 27000 | 0.0001 | - |
| 2.1597 | 27050 | 0.0001 | - |
| 2.1637 | 27100 | 0.0007 | - |
| 2.1677 | 27150 | 0.0001 | - |
| 2.1717 | 27200 | 0.0 | - |
| 2.1756 | 27250 | 0.0001 | - |
| 2.1796 | 27300 | 0.0005 | - |
| 2.1836 | 27350 | 0.0 | - |
| 2.1876 | 27400 | 0.0002 | - |
| 2.1916 | 27450 | 0.0001 | - |
| 2.1956 | 27500 | 0.0001 | - |
| 2.1996 | 27550 | 0.025 | - |
| 2.2036 | 27600 | 0.0001 | - |
| 2.2076 | 27650 | 0.0001 | - |
| 2.2116 | 27700 | 0.0001 | - |
| 2.2156 | 27750 | 0.0001 | - |
| 2.2196 | 27800 | 0.0001 | - |
| 2.2236 | 27850 | 0.0001 | - |
| 2.2275 | 27900 | 0.0001 | - |
| 2.2315 | 27950 | 0.0 | - |
| 2.2355 | 28000 | 0.0001 | - |
| 2.2395 | 28050 | 0.0001 | - |
| 2.2435 | 28100 | 0.0245 | - |
| 2.2475 | 28150 | 0.0001 | - |
| 2.2515 | 28200 | 0.0251 | - |
| 2.2555 | 28250 | 0.0001 | - |
| 2.2595 | 28300 | 0.0497 | - |
| 2.2635 | 28350 | 0.0002 | - |
| 2.2675 | 28400 | 0.0487 | - |
| 2.2715 | 28450 | 0.0703 | - |
| 2.2754 | 28500 | 0.0248 | - |
| 2.2794 | 28550 | 0.0001 | - |
| 2.2834 | 28600 | 0.0245 | - |
| 2.2874 | 28650 | 0.0004 | - |
| 2.2914 | 28700 | 0.0001 | - |
| 2.2954 | 28750 | 0.0001 | - |
| 2.2994 | 28800 | 0.0 | - |
| 2.3034 | 28850 | 0.0002 | - |
| 2.3074 | 28900 | 0.0489 | - |
| 2.3114 | 28950 | 0.0245 | - |
| 2.3154 | 29000 | 0.0002 | - |
| 2.3194 | 29050 | 0.0001 | - |
| 2.3234 | 29100 | 0.0001 | - |
| 2.3273 | 29150 | 0.0247 | - |
| 2.3313 | 29200 | 0.0003 | - |
| 2.3353 | 29250 | 0.0001 | - |
| 2.3393 | 29300 | 0.0001 | - |
| 2.3433 | 29350 | 0.0246 | - |
| 2.3473 | 29400 | 0.0246 | - |
| 2.3513 | 29450 | 0.0246 | - |
| 2.3553 | 29500 | 0.0001 | - |
| 2.3593 | 29550 | 0.0001 | - |
| 2.3633 | 29600 | 0.0001 | - |
| 2.3673 | 29650 | 0.0246 | - |
| 2.3713 | 29700 | 0.0 | - |
| 2.3752 | 29750 | 0.0246 | - |
| 2.3792 | 29800 | 0.0001 | - |
| 2.3832 | 29850 | 0.0001 | - |
| 2.3872 | 29900 | 0.0001 | - |
| 2.3912 | 29950 | 0.0002 | - |
| 2.3952 | 30000 | 0.0248 | - |
| 2.3992 | 30050 | 0.0002 | - |
| 2.4032 | 30100 | 0.0001 | - |
| 2.4072 | 30150 | 0.0001 | - |
| 2.4112 | 30200 | 0.0001 | - |
| 2.4152 | 30250 | 0.0001 | - |
| 2.4192 | 30300 | 0.0001 | - |
| 2.4232 | 30350 | 0.0245 | - |
| 2.4271 | 30400 | 0.0001 | - |
| 2.4311 | 30450 | 0.0001 | - |
| 2.4351 | 30500 | 0.0001 | - |
| 2.4391 | 30550 | 0.0 | - |
| 2.4431 | 30600 | 0.0001 | - |
| 2.4471 | 30650 | 0.0001 | - |
| 2.4511 | 30700 | 0.0001 | - |
| 2.4551 | 30750 | 0.0001 | - |
| 2.4591 | 30800 | 0.0001 | - |
| 2.4631 | 30850 | 0.0002 | - |
| 2.4671 | 30900 | 0.0001 | - |
| 2.4711 | 30950 | 0.0245 | - |
| 2.4750 | 31000 | 0.0001 | - |
| 2.4790 | 31050 | 0.0249 | - |
| 2.4830 | 31100 | 0.0246 | - |
| 2.4870 | 31150 | 0.0001 | - |
| 2.4910 | 31200 | 0.0246 | - |
| 2.4950 | 31250 | 0.0001 | - |
| 2.4990 | 31300 | 0.0247 | - |
| 2.5030 | 31350 | 0.0001 | - |
| 2.5070 | 31400 | 0.0001 | - |
| 2.5110 | 31450 | 0.0003 | - |
| 2.5150 | 31500 | 0.0002 | - |
| 2.5190 | 31550 | 0.0002 | - |
| 2.5230 | 31600 | 0.0001 | - |
| 2.5269 | 31650 | 0.0001 | - |
| 2.5309 | 31700 | 0.0003 | - |
| 2.5349 | 31750 | 0.0001 | - |
| 2.5389 | 31800 | 0.0001 | - |
| 2.5429 | 31850 | 0.0001 | - |
| 2.5469 | 31900 | 0.0001 | - |
| 2.5509 | 31950 | 0.0493 | - |
| 2.5549 | 32000 | 0.0001 | - |
| 2.5589 | 32050 | 0.0001 | - |
| 2.5629 | 32100 | 0.0003 | - |
| 2.5669 | 32150 | 0.0001 | - |
| 2.5709 | 32200 | 0.025 | - |
| 2.5749 | 32250 | 0.0001 | - |
| 2.5788 | 32300 | 0.0249 | - |
| 2.5828 | 32350 | 0.0001 | - |
| 2.5868 | 32400 | 0.0001 | - |
| 2.5908 | 32450 | 0.0001 | - |
| 2.5948 | 32500 | 0.0001 | - |
| 2.5988 | 32550 | 0.0004 | - |
| 2.6028 | 32600 | 0.0001 | - |
| 2.6068 | 32650 | 0.0001 | - |
| 2.6108 | 32700 | 0.0001 | - |
| 2.6148 | 32750 | 0.0001 | - |
| 2.6188 | 32800 | 0.0001 | - |
| 2.6228 | 32850 | 0.0001 | - |
| 2.6267 | 32900 | 0.0001 | - |
| 2.6307 | 32950 | 0.0492 | - |
| 2.6347 | 33000 | 0.0001 | - |
| 2.6387 | 33050 | 0.0001 | - |
| 2.6427 | 33100 | 0.0 | - |
| 2.6467 | 33150 | 0.0001 | - |
| 2.6507 | 33200 | 0.0247 | - |
| 2.6547 | 33250 | 0.0001 | - |
| 2.6587 | 33300 | 0.0001 | - |
| 2.6627 | 33350 | 0.0001 | - |
| 2.6667 | 33400 | 0.0001 | - |
| 2.6707 | 33450 | 0.0001 | - |
| 2.6747 | 33500 | 0.0001 | - |
| 2.6786 | 33550 | 0.0001 | - |
| 2.6826 | 33600 | 0.0001 | - |
| 2.6866 | 33650 | 0.0002 | - |
| 2.6906 | 33700 | 0.0001 | - |
| 2.6946 | 33750 | 0.0001 | - |
| 2.6986 | 33800 | 0.0001 | - |
| 2.7026 | 33850 | 0.0001 | - |
| 2.7066 | 33900 | 0.0254 | - |
| 2.7106 | 33950 | 0.0001 | - |
| 2.7146 | 34000 | 0.0001 | - |
| 2.7186 | 34050 | 0.0001 | - |
| 2.7226 | 34100 | 0.0001 | - |
| 2.7265 | 34150 | 0.0001 | - |
| 2.7305 | 34200 | 0.0001 | - |
| 2.7345 | 34250 | 0.0002 | - |
| 2.7385 | 34300 | 0.0498 | - |
| 2.7425 | 34350 | 0.0001 | - |
| 2.7465 | 34400 | 0.0001 | - |
| 2.7505 | 34450 | 0.0001 | - |
| 2.7545 | 34500 | 0.0001 | - |
| 2.7585 | 34550 | 0.0248 | - |
| 2.7625 | 34600 | 0.0 | - |
| 2.7665 | 34650 | 0.0001 | - |
| 2.7705 | 34700 | 0.0001 | - |
| 2.7745 | 34750 | 0.0001 | - |
| 2.7784 | 34800 | 0.0001 | - |
| 2.7824 | 34850 | 0.0247 | - |
| 2.7864 | 34900 | 0.0001 | - |
| 2.7904 | 34950 | 0.0001 | - |
| 2.7944 | 35000 | 0.0001 | - |
| 2.7984 | 35050 | 0.0001 | - |
| 2.8024 | 35100 | 0.0001 | - |
| 2.8064 | 35150 | 0.0001 | - |
| 2.8104 | 35200 | 0.0001 | - |
| 2.8144 | 35250 | 0.0001 | - |
| 2.8184 | 35300 | 0.0001 | - |
| 2.8224 | 35350 | 0.0001 | - |
| 2.8263 | 35400 | 0.0 | - |
| 2.8303 | 35450 | 0.0001 | - |
| 2.8343 | 35500 | 0.0 | - |
| 2.8383 | 35550 | 0.0 | - |
| 2.8423 | 35600 | 0.0001 | - |
| 2.8463 | 35650 | 0.0254 | - |
| 2.8503 | 35700 | 0.0001 | - |
| 2.8543 | 35750 | 0.0001 | - |
| 2.8583 | 35800 | 0.0001 | - |
| 2.8623 | 35850 | 0.0 | - |
| 2.8663 | 35900 | 0.0001 | - |
| 2.8703 | 35950 | 0.0001 | - |
| 2.8743 | 36000 | 0.0002 | - |
| 2.8782 | 36050 | 0.0001 | - |
| 2.8822 | 36100 | 0.0001 | - |
| 2.8862 | 36150 | 0.0 | - |
| 2.8902 | 36200 | 0.0001 | - |
| 2.8942 | 36250 | 0.0001 | - |
| 2.8982 | 36300 | 0.0001 | - |
| 2.9022 | 36350 | 0.0001 | - |
| 2.9062 | 36400 | 0.0001 | - |
| 2.9102 | 36450 | 0.0001 | - |
| 2.9142 | 36500 | 0.0001 | - |
| 2.9182 | 36550 | 0.0001 | - |
| 2.9222 | 36600 | 0.0001 | - |
| 2.9261 | 36650 | 0.0002 | - |
| 2.9301 | 36700 | 0.0001 | - |
| 2.9341 | 36750 | 0.0248 | - |
| 2.9381 | 36800 | 0.0245 | - |
| 2.9421 | 36850 | 0.0001 | - |
| 2.9461 | 36900 | 0.0 | - |
| 2.9501 | 36950 | 0.0001 | - |
| 2.9541 | 37000 | 0.0001 | - |
| 2.9581 | 37050 | 0.0001 | - |
| 2.9621 | 37100 | 0.0001 | - |
| 2.9661 | 37150 | 0.0001 | - |
| 2.9701 | 37200 | 0.0001 | - |
| 2.9741 | 37250 | 0.0001 | - |
| 2.9780 | 37300 | 0.0 | - |
| 2.9820 | 37350 | 0.0503 | - |
| 2.9860 | 37400 | 0.0001 | - |
| 2.9900 | 37450 | 0.0246 | - |
| 2.9940 | 37500 | 0.0001 | - |
| 2.9980 | 37550 | 0.0001 | - |
| 3.0 | 37575 | - | 0.0396 |
| 3.0020 | 37600 | 0.0248 | - |
| 3.0060 | 37650 | 0.0001 | - |
| 3.0100 | 37700 | 0.0001 | - |
| 3.0140 | 37750 | 0.0245 | - |
| 3.0180 | 37800 | 0.0002 | - |
| 3.0220 | 37850 | 0.0 | - |
| 3.0259 | 37900 | 0.0001 | - |
| 3.0299 | 37950 | 0.0001 | - |
| 3.0339 | 38000 | 0.0003 | - |
| 3.0379 | 38050 | 0.0001 | - |
| 3.0419 | 38100 | 0.0001 | - |
| 3.0459 | 38150 | 0.0001 | - |
| 3.0499 | 38200 | 0.0001 | - |
| 3.0539 | 38250 | 0.0001 | - |
| 3.0579 | 38300 | 0.0001 | - |
| 3.0619 | 38350 | 0.0002 | - |
| 3.0659 | 38400 | 0.0251 | - |
| 3.0699 | 38450 | 0.0001 | - |
| 3.0739 | 38500 | 0.0001 | - |
| 3.0778 | 38550 | 0.0001 | - |
| 3.0818 | 38600 | 0.0001 | - |
| 3.0858 | 38650 | 0.0001 | - |
| 3.0898 | 38700 | 0.0001 | - |
| 3.0938 | 38750 | 0.0001 | - |
| 3.0978 | 38800 | 0.0001 | - |
| 3.1018 | 38850 | 0.0001 | - |
| 3.1058 | 38900 | 0.0001 | - |
| 3.1098 | 38950 | 0.0 | - |
| 3.1138 | 39000 | 0.0001 | - |
| 3.1178 | 39050 | 0.0001 | - |
| 3.1218 | 39100 | 0.0001 | - |
| 3.1257 | 39150 | 0.0001 | - |
| 3.1297 | 39200 | 0.0001 | - |
| 3.1337 | 39250 | 0.0001 | - |
| 3.1377 | 39300 | 0.0 | - |
| 3.1417 | 39350 | 0.0001 | - |
| 3.1457 | 39400 | 0.0002 | - |
| 3.1497 | 39450 | 0.0001 | - |
| 3.1537 | 39500 | 0.0002 | - |
| 3.1577 | 39550 | 0.0001 | - |
| 3.1617 | 39600 | 0.0717 | - |
| 3.1657 | 39650 | 0.0001 | - |
| 3.1697 | 39700 | 0.0001 | - |
| 3.1737 | 39750 | 0.0004 | - |
| 3.1776 | 39800 | 0.0244 | - |
| 3.1816 | 39850 | 0.0001 | - |
| 3.1856 | 39900 | 0.0239 | - |
| 3.1896 | 39950 | 0.0245 | - |
| 3.1936 | 40000 | 0.0245 | - |
| 3.1976 | 40050 | 0.0001 | - |
| 3.2016 | 40100 | 0.0184 | - |
| 3.2056 | 40150 | 0.0246 | - |
| 3.2096 | 40200 | 0.0001 | - |
| 3.2136 | 40250 | 0.0001 | - |
| 3.2176 | 40300 | 0.0001 | - |
| 3.2216 | 40350 | 0.0001 | - |
| 3.2255 | 40400 | 0.0001 | - |
| 3.2295 | 40450 | 0.0002 | - |
| 3.2335 | 40500 | 0.0248 | - |
| 3.2375 | 40550 | 0.0001 | - |
| 3.2415 | 40600 | 0.0244 | - |
| 3.2455 | 40650 | 0.0002 | - |
| 3.2495 | 40700 | 0.0001 | - |
| 3.2535 | 40750 | 0.0001 | - |
| 3.2575 | 40800 | 0.0 | - |
| 3.2615 | 40850 | 0.0 | - |
| 3.2655 | 40900 | 0.0001 | - |
| 3.2695 | 40950 | 0.0247 | - |
| 3.2735 | 41000 | 0.0001 | - |
| 3.2774 | 41050 | 0.0001 | - |
| 3.2814 | 41100 | 0.0246 | - |
| 3.2854 | 41150 | 0.0001 | - |
| 3.2894 | 41200 | 0.0001 | - |
| 3.2934 | 41250 | 0.0001 | - |
| 3.2974 | 41300 | 0.0001 | - |
| 3.3014 | 41350 | 0.0001 | - |
| 3.3054 | 41400 | 0.0246 | - |
| 3.3094 | 41450 | 0.0246 | - |
| 3.3134 | 41500 | 0.0246 | - |
| 3.3174 | 41550 | 0.0001 | - |
| 3.3214 | 41600 | 0.0003 | - |
| 3.3253 | 41650 | 0.0001 | - |
| 3.3293 | 41700 | 0.0001 | - |
| 3.3333 | 41750 | 0.025 | - |
| 3.3373 | 41800 | 0.0 | - |
| 3.3413 | 41850 | 0.0245 | - |
| 3.3453 | 41900 | 0.0001 | - |
| 3.3493 | 41950 | 0.0246 | - |
| 3.3533 | 42000 | 0.0001 | - |
| 3.3573 | 42050 | 0.0001 | - |
| 3.3613 | 42100 | 0.0001 | - |
| 3.3653 | 42150 | 0.0001 | - |
| 3.3693 | 42200 | 0.0248 | - |
| 3.3733 | 42250 | 0.0245 | - |
| 3.3772 | 42300 | 0.0001 | - |
| 3.3812 | 42350 | 0.0 | - |
| 3.3852 | 42400 | 0.0001 | - |
| 3.3892 | 42450 | 0.0001 | - |
| 3.3932 | 42500 | 0.0001 | - |
| 3.3972 | 42550 | 0.0001 | - |
| 3.4012 | 42600 | 0.0001 | - |
| 3.4052 | 42650 | 0.0001 | - |
| 3.4092 | 42700 | 0.0001 | - |
| 3.4132 | 42750 | 0.0001 | - |
| 3.4172 | 42800 | 0.0001 | - |
| 3.4212 | 42850 | 0.0 | - |
| 3.4251 | 42900 | 0.0 | - |
| 3.4291 | 42950 | 0.0001 | - |
| 3.4331 | 43000 | 0.0001 | - |
| 3.4371 | 43050 | 0.0001 | - |
| 3.4411 | 43100 | 0.0001 | - |
| 3.4451 | 43150 | 0.0002 | - |
| 3.4491 | 43200 | 0.0001 | - |
| 3.4531 | 43250 | 0.0002 | - |
| 3.4571 | 43300 | 0.0001 | - |
| 3.4611 | 43350 | 0.0 | - |
| 3.4651 | 43400 | 0.0001 | - |
| 3.4691 | 43450 | 0.0246 | - |
| 3.4731 | 43500 | 0.0001 | - |
| 3.4770 | 43550 | 0.0001 | - |
| 3.4810 | 43600 | 0.0246 | - |
| 3.4850 | 43650 | 0.0001 | - |
| 3.4890 | 43700 | 0.0001 | - |
| 3.4930 | 43750 | 0.0001 | - |
| 3.4970 | 43800 | 0.0245 | - |
| 3.5010 | 43850 | 0.0001 | - |
| 3.5050 | 43900 | 0.0001 | - |
| 3.5090 | 43950 | 0.0001 | - |
| 3.5130 | 44000 | 0.0001 | - |
| 3.5170 | 44050 | 0.0001 | - |
| 3.5210 | 44100 | 0.0001 | - |
| 3.5250 | 44150 | 0.0001 | - |
| 3.5289 | 44200 | 0.0001 | - |
| 3.5329 | 44250 | 0.0001 | - |
| 3.5369 | 44300 | 0.0 | - |
| 3.5409 | 44350 | 0.0001 | - |
| 3.5449 | 44400 | 0.0 | - |
| 3.5489 | 44450 | 0.0249 | - |
| 3.5529 | 44500 | 0.0 | - |
| 3.5569 | 44550 | 0.0002 | - |
| 3.5609 | 44600 | 0.0001 | - |
| 3.5649 | 44650 | 0.0002 | - |
| 3.5689 | 44700 | 0.0001 | - |
| 3.5729 | 44750 | 0.0 | - |
| 3.5768 | 44800 | 0.0 | - |
| 3.5808 | 44850 | 0.0 | - |
| 3.5848 | 44900 | 0.0 | - |
| 3.5888 | 44950 | 0.0 | - |
| 3.5928 | 45000 | 0.0001 | - |
| 3.5968 | 45050 | 0.0001 | - |
| 3.6008 | 45100 | 0.0001 | - |
| 3.6048 | 45150 | 0.0001 | - |
| 3.6088 | 45200 | 0.0002 | - |
| 3.6128 | 45250 | 0.0 | - |
| 3.6168 | 45300 | 0.0001 | - |
| 3.6208 | 45350 | 0.0001 | - |
| 3.6248 | 45400 | 0.0001 | - |
| 3.6287 | 45450 | 0.0244 | - |
| 3.6327 | 45500 | 0.0 | - |
| 3.6367 | 45550 | 0.0001 | - |
| 3.6407 | 45600 | 0.0 | - |
| 3.6447 | 45650 | 0.0001 | - |
| 3.6487 | 45700 | 0.0243 | - |
| 3.6527 | 45750 | 0.0252 | - |
| 3.6567 | 45800 | 0.0001 | - |
| 3.6607 | 45850 | 0.0001 | - |
| 3.6647 | 45900 | 0.0001 | - |
| 3.6687 | 45950 | 0.0 | - |
| 3.6727 | 46000 | 0.0001 | - |
| 3.6766 | 46050 | 0.0001 | - |
| 3.6806 | 46100 | 0.0002 | - |
| 3.6846 | 46150 | 0.0 | - |
| 3.6886 | 46200 | 0.0247 | - |
| 3.6926 | 46250 | 0.0 | - |
| 3.6966 | 46300 | 0.0001 | - |
| 3.7006 | 46350 | 0.0 | - |
| 3.7046 | 46400 | 0.0001 | - |
| 3.7086 | 46450 | 0.0001 | - |
| 3.7126 | 46500 | 0.0001 | - |
| 3.7166 | 46550 | 0.0 | - |
| 3.7206 | 46600 | 0.0001 | - |
| 3.7246 | 46650 | 0.0 | - |
| 3.7285 | 46700 | 0.0001 | - |
| 3.7325 | 46750 | 0.0001 | - |
| 3.7365 | 46800 | 0.0246 | - |
| 3.7405 | 46850 | 0.0 | - |
| 3.7445 | 46900 | 0.0001 | - |
| 3.7485 | 46950 | 0.0001 | - |
| 3.7525 | 47000 | 0.0001 | - |
| 3.7565 | 47050 | 0.0001 | - |
| 3.7605 | 47100 | 0.0001 | - |
| 3.7645 | 47150 | 0.025 | - |
| 3.7685 | 47200 | 0.0001 | - |
| 3.7725 | 47250 | 0.0002 | - |
| 3.7764 | 47300 | 0.0001 | - |
| 3.7804 | 47350 | 0.0247 | - |
| 3.7844 | 47400 | 0.0248 | - |
| 3.7884 | 47450 | 0.0001 | - |
| 3.7924 | 47500 | 0.0 | - |
| 3.7964 | 47550 | 0.0001 | - |
| 3.8004 | 47600 | 0.025 | - |
| 3.8044 | 47650 | 0.0001 | - |
| 3.8084 | 47700 | 0.0001 | - |
| 3.8124 | 47750 | 0.0002 | - |
| 3.8164 | 47800 | 0.0001 | - |
| 3.8204 | 47850 | 0.0001 | - |
| 3.8244 | 47900 | 0.0252 | - |
| 3.8283 | 47950 | 0.0001 | - |
| 3.8323 | 48000 | 0.0254 | - |
| 3.8363 | 48050 | 0.0249 | - |
| 3.8403 | 48100 | 0.0001 | - |
| 3.8443 | 48150 | 0.0001 | - |
| 3.8483 | 48200 | 0.0001 | - |
| 3.8523 | 48250 | 0.0 | - |
| 3.8563 | 48300 | 0.0001 | - |
| 3.8603 | 48350 | 0.0001 | - |
| 3.8643 | 48400 | 0.0001 | - |
| 3.8683 | 48450 | 0.0001 | - |
| 3.8723 | 48500 | 0.0001 | - |
| 3.8762 | 48550 | 0.0006 | - |
| 3.8802 | 48600 | 0.0003 | - |
| 3.8842 | 48650 | 0.0 | - |
| 3.8882 | 48700 | 0.0003 | - |
| 3.8922 | 48750 | 0.0001 | - |
| 3.8962 | 48800 | 0.0001 | - |
| 3.9002 | 48850 | 0.0001 | - |
| 3.9042 | 48900 | 0.0001 | - |
| 3.9082 | 48950 | 0.0001 | - |
| 3.9122 | 49000 | 0.0001 | - |
| 3.9162 | 49050 | 0.0246 | - |
| 3.9202 | 49100 | 0.0 | - |
| 3.9242 | 49150 | 0.0001 | - |
| 3.9281 | 49200 | 0.0001 | - |
| 3.9321 | 49250 | 0.0001 | - |
| 3.9361 | 49300 | 0.0246 | - |
| 3.9401 | 49350 | 0.0 | - |
| 3.9441 | 49400 | 0.0001 | - |
| 3.9481 | 49450 | 0.0002 | - |
| 3.9521 | 49500 | 0.0 | - |
| 3.9561 | 49550 | 0.0 | - |
| 3.9601 | 49600 | 0.0002 | - |
| 3.9641 | 49650 | 0.0248 | - |
| 3.9681 | 49700 | 0.0001 | - |
| 3.9721 | 49750 | 0.0001 | - |
| 3.9760 | 49800 | 0.0001 | - |
| 3.9800 | 49850 | 0.0248 | - |
| 3.9840 | 49900 | 0.0001 | - |
| 3.9880 | 49950 | 0.0245 | - |
| 3.9920 | 50000 | 0.0001 | - |
| 3.9960 | 50050 | 0.0487 | - |
| 4.0 | 50100 | 0.0002 | 0.0927 |
| 4.0040 | 50150 | 0.0001 | - |
| 4.0080 | 50200 | 0.0251 | - |
| 4.0120 | 50250 | 0.0245 | - |
| 4.0160 | 50300 | 0.0001 | - |
| 4.0200 | 50350 | 0.0001 | - |
| 4.0240 | 50400 | 0.0001 | - |
| 4.0279 | 50450 | 0.0 | - |
| 4.0319 | 50500 | 0.0001 | - |
| 4.0359 | 50550 | 0.0255 | - |
| 4.0399 | 50600 | 0.0001 | - |
| 4.0439 | 50650 | 0.0 | - |
| 4.0479 | 50700 | 0.0001 | - |
| 4.0519 | 50750 | 0.0001 | - |
| 4.0559 | 50800 | 0.0 | - |
| 4.0599 | 50850 | 0.0 | - |
| 4.0639 | 50900 | 0.0001 | - |
| 4.0679 | 50950 | 0.0001 | - |
| 4.0719 | 51000 | 0.0001 | - |
| 4.0758 | 51050 | 0.0001 | - |
| 4.0798 | 51100 | 0.0242 | - |
| 4.0838 | 51150 | 0.0001 | - |
| 4.0878 | 51200 | 0.0001 | - |
| 4.0918 | 51250 | 0.0001 | - |
| 4.0958 | 51300 | 0.0 | - |
| 4.0998 | 51350 | 0.0001 | - |
| 4.1038 | 51400 | 0.0 | - |
| 4.1078 | 51450 | 0.0 | - |
| 4.1118 | 51500 | 0.0246 | - |
| 4.1158 | 51550 | 0.0 | - |
| 4.1198 | 51600 | 0.0249 | - |
| 4.1238 | 51650 | 0.0001 | - |
| 4.1277 | 51700 | 0.0001 | - |
| 4.1317 | 51750 | 0.0001 | - |
| 4.1357 | 51800 | 0.0001 | - |
| 4.1397 | 51850 | 0.0 | - |
| 4.1437 | 51900 | 0.0001 | - |
| 4.1477 | 51950 | 0.0 | - |
| 4.1517 | 52000 | 0.0001 | - |
| 4.1557 | 52050 | 0.0001 | - |
| 4.1597 | 52100 | 0.0001 | - |
| 4.1637 | 52150 | 0.0001 | - |
| 4.1677 | 52200 | 0.0001 | - |
| 4.1717 | 52250 | 0.0001 | - |
| 4.1756 | 52300 | 0.0 | - |
| 4.1796 | 52350 | 0.0001 | - |
| 4.1836 | 52400 | 0.0001 | - |
| 4.1876 | 52450 | 0.0 | - |
| 4.1916 | 52500 | 0.0001 | - |
| 4.1956 | 52550 | 0.0001 | - |
| 4.1996 | 52600 | 0.0252 | - |
| 4.2036 | 52650 | 0.0001 | - |
| 4.2076 | 52700 | 0.0001 | - |
| 4.2116 | 52750 | 0.0001 | - |
| 4.2156 | 52800 | 0.0001 | - |
| 4.2196 | 52850 | 0.0001 | - |
| 4.2236 | 52900 | 0.0001 | - |
| 4.2275 | 52950 | 0.0 | - |
| 4.2315 | 53000 | 0.0 | - |
| 4.2355 | 53050 | 0.0001 | - |
| 4.2395 | 53100 | 0.0244 | - |
| 4.2435 | 53150 | 0.0001 | - |
| 4.2475 | 53200 | 0.0001 | - |
| 4.2515 | 53250 | 0.0248 | - |
| 4.2555 | 53300 | 0.0001 | - |
| 4.2595 | 53350 | 0.0 | - |
| 4.2635 | 53400 | 0.0 | - |
| 4.2675 | 53450 | 0.0245 | - |
| 4.2715 | 53500 | 0.0 | - |
| 4.2754 | 53550 | 0.0251 | - |
| 4.2794 | 53600 | 0.0 | - |
| 4.2834 | 53650 | 0.0001 | - |
| 4.2874 | 53700 | 0.0001 | - |
| 4.2914 | 53750 | 0.0001 | - |
| 4.2954 | 53800 | 0.0 | - |
| 4.2994 | 53850 | 0.0 | - |
| 4.3034 | 53900 | 0.0247 | - |
| 4.3074 | 53950 | 0.049 | - |
| 4.3114 | 54000 | 0.0 | - |
| 4.3154 | 54050 | 0.0001 | - |
| 4.3194 | 54100 | 0.0 | - |
| 4.3234 | 54150 | 0.0001 | - |
| 4.3273 | 54200 | 0.0001 | - |
| 4.3313 | 54250 | 0.0001 | - |
| 4.3353 | 54300 | 0.0001 | - |
| 4.3393 | 54350 | 0.0243 | - |
| 4.3433 | 54400 | 0.0001 | - |
| 4.3473 | 54450 | 0.0246 | - |
| 4.3513 | 54500 | 0.0 | - |
| 4.3553 | 54550 | 0.0001 | - |
| 4.3593 | 54600 | 0.0001 | - |
| 4.3633 | 54650 | 0.0001 | - |
| 4.3673 | 54700 | 0.0 | - |
| 4.3713 | 54750 | 0.0246 | - |
| 4.3752 | 54800 | 0.0 | - |
| 4.3792 | 54850 | 0.0 | - |
| 4.3832 | 54900 | 0.0001 | - |
| 4.3872 | 54950 | 0.0001 | - |
| 4.3912 | 55000 | 0.0001 | - |
| 4.3952 | 55050 | 0.0001 | - |
| 4.3992 | 55100 | 0.0001 | - |
| 4.4032 | 55150 | 0.0 | - |
| 4.4072 | 55200 | 0.0001 | - |
| 4.4112 | 55250 | 0.0 | - |
| 4.4152 | 55300 | 0.0 | - |
| 4.4192 | 55350 | 0.0001 | - |
| 4.4232 | 55400 | 0.0244 | - |
| 4.4271 | 55450 | 0.0 | - |
| 4.4311 | 55500 | 0.0 | - |
| 4.4351 | 55550 | 0.0001 | - |
| 4.4391 | 55600 | 0.0 | - |
| 4.4431 | 55650 | 0.0001 | - |
| 4.4471 | 55700 | 0.0001 | - |
| 4.4511 | 55750 | 0.0959 | - |
| 4.4551 | 55800 | 0.0002 | - |
| 4.4591 | 55850 | 0.0001 | - |
| 4.4631 | 55900 | 0.0001 | - |
| 4.4671 | 55950 | 0.0246 | - |
| 4.4711 | 56000 | 0.0001 | - |
| 4.4750 | 56050 | 0.0001 | - |
| 4.4790 | 56100 | 0.0246 | - |
| 4.4830 | 56150 | 0.024 | - |
| 4.4870 | 56200 | 0.0001 | - |
| 4.4910 | 56250 | 0.0001 | - |
| 4.4950 | 56300 | 0.0245 | - |
| 4.4990 | 56350 | 0.0001 | - |
| 4.5030 | 56400 | 0.0001 | - |
| 4.5070 | 56450 | 0.0001 | - |
| 4.5110 | 56500 | 0.0001 | - |
| 4.5150 | 56550 | 0.0001 | - |
| 4.5190 | 56600 | 0.0001 | - |
| 4.5230 | 56650 | 0.0001 | - |
| 4.5269 | 56700 | 0.0 | - |
| 4.5309 | 56750 | 0.0002 | - |
| 4.5349 | 56800 | 0.0001 | - |
| 4.5389 | 56850 | 0.0001 | - |
| 4.5429 | 56900 | 0.0001 | - |
| 4.5469 | 56950 | 0.0001 | - |
| 4.5509 | 57000 | 0.0 | - |
| 4.5549 | 57050 | 0.0001 | - |
| 4.5589 | 57100 | 0.0001 | - |
| 4.5629 | 57150 | 0.0001 | - |
| 4.5669 | 57200 | 0.0 | - |
| 4.5709 | 57250 | 0.0001 | - |
| 4.5749 | 57300 | 0.0001 | - |
| 4.5788 | 57350 | 0.0252 | - |
| 4.5828 | 57400 | 0.0 | - |
| 4.5868 | 57450 | 0.0 | - |
| 4.5908 | 57500 | 0.0001 | - |
| 4.5948 | 57550 | 0.0001 | - |
| 4.5988 | 57600 | 0.0001 | - |
| 4.6028 | 57650 | 0.0001 | - |
| 4.6068 | 57700 | 0.0 | - |
| 4.6108 | 57750 | 0.0 | - |
| 4.6148 | 57800 | 0.0001 | - |
| 4.6188 | 57850 | 0.0001 | - |
| 4.6228 | 57900 | 0.0 | - |
| 4.6267 | 57950 | 0.0244 | - |
| 4.6307 | 58000 | 0.0416 | - |
| 4.6347 | 58050 | 0.0001 | - |
| 4.6387 | 58100 | 0.0 | - |
| 4.6427 | 58150 | 0.0 | - |
| 4.6467 | 58200 | 0.0245 | - |
| 4.6507 | 58250 | 0.0001 | - |
| 4.6547 | 58300 | 0.0001 | - |
| 4.6587 | 58350 | 0.0 | - |
| 4.6627 | 58400 | 0.0001 | - |
| 4.6667 | 58450 | 0.0001 | - |
| 4.6707 | 58500 | 0.0001 | - |
| 4.6747 | 58550 | 0.0001 | - |
| 4.6786 | 58600 | 0.0001 | - |
| 4.6826 | 58650 | 0.0001 | - |
| 4.6866 | 58700 | 0.0002 | - |
| 4.6906 | 58750 | 0.0 | - |
| 4.6946 | 58800 | 0.0001 | - |
| 4.6986 | 58850 | 0.0001 | - |
| 4.7026 | 58900 | 0.0001 | - |
| 4.7066 | 58950 | 0.0253 | - |
| 4.7106 | 59000 | 0.0001 | - |
| 4.7146 | 59050 | 0.0 | - |
| 4.7186 | 59100 | 0.0001 | - |
| 4.7226 | 59150 | 0.0 | - |
| 4.7265 | 59200 | 0.0001 | - |
| 4.7305 | 59250 | 0.0001 | - |
| 4.7345 | 59300 | 0.0246 | - |
| 4.7385 | 59350 | 0.0252 | - |
| 4.7425 | 59400 | 0.0001 | - |
| 4.7465 | 59450 | 0.1531 | - |
| 4.7505 | 59500 | 0.0001 | - |
| 4.7545 | 59550 | 0.0001 | - |
| 4.7585 | 59600 | 0.025 | - |
| 4.7625 | 59650 | 0.0 | - |
| 4.7665 | 59700 | 0.0001 | - |
| 4.7705 | 59750 | 0.0001 | - |
| 4.7745 | 59800 | 0.0001 | - |
| 4.7784 | 59850 | 0.0244 | - |
| 4.7824 | 59900 | 0.0009 | - |
| 4.7864 | 59950 | 0.0001 | - |
| 4.7904 | 60000 | 0.0009 | - |
| 4.7944 | 60050 | 0.0015 | - |
| 4.7984 | 60100 | 0.0252 | - |
| 4.8024 | 60150 | 0.0001 | - |
| 4.8064 | 60200 | 0.0245 | - |
| 4.8104 | 60250 | 0.0003 | - |
| 4.8144 | 60300 | 0.0002 | - |
| 4.8184 | 60350 | 0.0001 | - |
| 4.8224 | 60400 | 0.0001 | - |
| 4.8263 | 60450 | 0.0249 | - |
| 4.8303 | 60500 | 0.0002 | - |
| 4.8343 | 60550 | 0.0001 | - |
| 4.8383 | 60600 | 0.0001 | - |
| 4.8423 | 60650 | 0.0001 | - |
| 4.8463 | 60700 | 0.0242 | - |
| 4.8503 | 60750 | 0.0001 | - |
| 4.8543 | 60800 | 0.0002 | - |
| 4.8583 | 60850 | 0.0001 | - |
| 4.8623 | 60900 | 0.0001 | - |
| 4.8663 | 60950 | 0.0005 | - |
| 4.8703 | 61000 | 0.0001 | - |
| 4.8743 | 61050 | 0.0247 | - |
| 4.8782 | 61100 | 0.0001 | - |
| 4.8822 | 61150 | 0.0001 | - |
| 4.8862 | 61200 | 0.0001 | - |
| 4.8902 | 61250 | 0.0001 | - |
| 4.8942 | 61300 | 0.0247 | - |
| 4.8982 | 61350 | 0.0245 | - |
| 4.9022 | 61400 | 0.0001 | - |
| 4.9062 | 61450 | 0.0001 | - |
| 4.9102 | 61500 | 0.0249 | - |
| 4.9142 | 61550 | 0.2375 | - |
| 4.9182 | 61600 | 0.0001 | - |
| 4.9222 | 61650 | 0.0001 | - |
| 4.9261 | 61700 | 0.0002 | - |
| 4.9301 | 61750 | 0.0001 | - |
| 4.9341 | 61800 | 0.0494 | - |
| 4.9381 | 61850 | 0.0001 | - |
| 4.9421 | 61900 | 0.0001 | - |
| 4.9461 | 61950 | 0.0001 | - |
| 4.9501 | 62000 | 0.0001 | - |
| 4.9541 | 62050 | 0.0001 | - |
| 4.9581 | 62100 | 0.024 | - |
| 4.9621 | 62150 | 0.0001 | - |
| 4.9661 | 62200 | 0.0001 | - |
| 4.9701 | 62250 | 0.0001 | - |
| 4.9741 | 62300 | 0.0002 | - |
| 4.9780 | 62350 | 0.0001 | - |
| 4.9820 | 62400 | 0.0502 | - |
| 4.9860 | 62450 | 0.0244 | - |
| 4.9900 | 62500 | 0.0001 | - |
| 4.9940 | 62550 | 0.0001 | - |
| 4.9980 | 62600 | 0.0001 | - |
| 5.0 | 62625 | - | 0.1298 |
| 5.0020 | 62650 | 0.025 | - |
| 5.0060 | 62700 | 0.0001 | - |
| 5.0100 | 62750 | 0.0244 | - |
| 5.0140 | 62800 | 0.0001 | - |
| 5.0180 | 62850 | 0.0001 | - |
| 5.0220 | 62900 | 0.0001 | - |
| 5.0259 | 62950 | 0.0001 | - |
| 5.0299 | 63000 | 0.0001 | - |
| 5.0339 | 63050 | 0.0002 | - |
| 5.0379 | 63100 | 0.0001 | - |
| 5.0419 | 63150 | 0.0001 | - |
| 5.0459 | 63200 | 0.0001 | - |
| 5.0499 | 63250 | 0.0001 | - |
| 5.0539 | 63300 | 0.0001 | - |
| 5.0579 | 63350 | 0.0001 | - |
| 5.0619 | 63400 | 0.0001 | - |
| 5.0659 | 63450 | 0.0249 | - |
| 5.0699 | 63500 | 0.0001 | - |
| 5.0739 | 63550 | 0.0001 | - |
| 5.0778 | 63600 | 0.0002 | - |
| 5.0818 | 63650 | 0.0001 | - |
| 5.0858 | 63700 | 0.0001 | - |
| 5.0898 | 63750 | 0.0001 | - |
| 5.0938 | 63800 | 0.0001 | - |
| 5.0978 | 63850 | 0.0001 | - |
| 5.1018 | 63900 | 0.0001 | - |
| 5.1058 | 63950 | 0.0001 | - |
| 5.1098 | 64000 | 0.0001 | - |
| 5.1138 | 64050 | 0.0001 | - |
| 5.1178 | 64100 | 0.0001 | - |
| 5.1218 | 64150 | 0.0001 | - |
| 5.1257 | 64200 | 0.0001 | - |
| 5.1297 | 64250 | 0.0001 | - |
| 5.1337 | 64300 | 0.0002 | - |
| 5.1377 | 64350 | 0.0001 | - |
| 5.1417 | 64400 | 0.0001 | - |
| 5.1457 | 64450 | 0.0002 | - |
| 5.1497 | 64500 | 0.0001 | - |
| 5.1537 | 64550 | 0.0001 | - |
| 5.1577 | 64600 | 0.0001 | - |
| 5.1617 | 64650 | 0.0003 | - |
| 5.1657 | 64700 | 0.0001 | - |
| 5.1697 | 64750 | 0.0001 | - |
| 5.1737 | 64800 | 0.0001 | - |
| 5.1776 | 64850 | 0.0243 | - |
| 5.1816 | 64900 | 0.0001 | - |
| 5.1856 | 64950 | 0.0003 | - |
| 5.1896 | 65000 | 0.0001 | - |
| 5.1936 | 65050 | 0.0001 | - |
| 5.1976 | 65100 | 0.0001 | - |
| 5.2016 | 65150 | 0.0001 | - |
| 5.2056 | 65200 | 0.0245 | - |
| 5.2096 | 65250 | 0.0001 | - |
| 5.2136 | 65300 | 0.0001 | - |
| 5.2176 | 65350 | 0.0001 | - |
| 5.2216 | 65400 | 0.0001 | - |
| 5.2255 | 65450 | 0.0001 | - |
| 5.2295 | 65500 | 0.0002 | - |
| 5.2335 | 65550 | 0.0248 | - |
| 5.2375 | 65600 | 0.0243 | - |
| 5.2415 | 65650 | 0.0001 | - |
| 5.2455 | 65700 | 0.0002 | - |
| 5.2495 | 65750 | 0.0001 | - |
| 5.2535 | 65800 | 0.0001 | - |
| 5.2575 | 65850 | 0.0 | - |
| 5.2615 | 65900 | 0.0001 | - |
| 5.2655 | 65950 | 0.0245 | - |
| 5.2695 | 66000 | 0.0001 | - |
| 5.2735 | 66050 | 0.0001 | - |
| 5.2774 | 66100 | 0.0001 | - |
| 5.2814 | 66150 | 0.0244 | - |
| 5.2854 | 66200 | 0.0001 | - |
| 5.2894 | 66250 | 0.0001 | - |
| 5.2934 | 66300 | 0.0001 | - |
| 5.2974 | 66350 | 0.0001 | - |
| 5.3014 | 66400 | 0.0247 | - |
| 5.3054 | 66450 | 0.0244 | - |
| 5.3094 | 66500 | 0.0001 | - |
| 5.3134 | 66550 | 0.0248 | - |
| 5.3174 | 66600 | 0.0001 | - |
| 5.3214 | 66650 | 0.0003 | - |
| 5.3253 | 66700 | 0.0001 | - |
| 5.3293 | 66750 | 0.0001 | - |
| 5.3333 | 66800 | 0.0249 | - |
| 5.3373 | 66850 | 0.0244 | - |
| 5.3413 | 66900 | 0.0001 | - |
| 5.3453 | 66950 | 0.0246 | - |
| 5.3493 | 67000 | 0.0 | - |
| 5.3533 | 67050 | 0.0001 | - |
| 5.3573 | 67100 | 0.0001 | - |
| 5.3613 | 67150 | 0.0001 | - |
| 5.3653 | 67200 | 0.0001 | - |
| 5.3693 | 67250 | 0.0494 | - |
| 5.3733 | 67300 | 0.0001 | - |
| 5.3772 | 67350 | 0.0001 | - |
| 5.3812 | 67400 | 0.0001 | - |
| 5.3852 | 67450 | 0.0001 | - |
| 5.3892 | 67500 | 0.0001 | - |
| 5.3932 | 67550 | 0.0001 | - |
| 5.3972 | 67600 | 0.0001 | - |
| 5.4012 | 67650 | 0.0001 | - |
| 5.4052 | 67700 | 0.0001 | - |
| 5.4092 | 67750 | 0.0001 | - |
| 5.4132 | 67800 | 0.0001 | - |
| 5.4172 | 67850 | 0.0001 | - |
| 5.4212 | 67900 | 0.0 | - |
| 5.4251 | 67950 | 0.0 | - |
| 5.4291 | 68000 | 0.0001 | - |
| 5.4331 | 68050 | 0.0001 | - |
| 5.4371 | 68100 | 0.0001 | - |
| 5.4411 | 68150 | 0.0001 | - |
| 5.4451 | 68200 | 0.0002 | - |
| 5.4491 | 68250 | 0.0001 | - |
| 5.4531 | 68300 | 0.0001 | - |
| 5.4571 | 68350 | 0.0001 | - |
| 5.4611 | 68400 | 0.0001 | - |
| 5.4651 | 68450 | 0.0245 | - |
| 5.4691 | 68500 | 0.0001 | - |
| 5.4731 | 68550 | 0.0001 | - |
| 5.4770 | 68600 | 0.0245 | - |
| 5.4810 | 68650 | 0.0001 | - |
| 5.4850 | 68700 | 0.0001 | - |
| 5.4890 | 68750 | 0.0001 | - |
| 5.4930 | 68800 | 0.0246 | - |
| 5.4970 | 68850 | 0.0 | - |
| 5.5010 | 68900 | 0.0001 | - |
| 5.5050 | 68950 | 0.0001 | - |
| 5.5090 | 69000 | 0.0001 | - |
| 5.5130 | 69050 | 0.0001 | - |
| 5.5170 | 69100 | 0.0001 | - |
| 5.5210 | 69150 | 0.0001 | - |
| 5.5250 | 69200 | 0.0001 | - |
| 5.5289 | 69250 | 0.0001 | - |
| 5.5329 | 69300 | 0.0001 | - |
| 5.5369 | 69350 | 0.0 | - |
| 5.5409 | 69400 | 0.0001 | - |
| 5.5449 | 69450 | 0.0001 | - |
| 5.5489 | 69500 | 0.0249 | - |
| 5.5529 | 69550 | 0.0 | - |
| 5.5569 | 69600 | 0.0001 | - |
| 5.5609 | 69650 | 0.0001 | - |
| 5.5649 | 69700 | 0.0016 | - |
| 5.5689 | 69750 | 0.0001 | - |
| 5.5729 | 69800 | 0.0 | - |
| 5.5768 | 69850 | 0.0 | - |
| 5.5808 | 69900 | 0.0 | - |
| 5.5848 | 69950 | 0.0 | - |
| 5.5888 | 70000 | 0.0 | - |
| 5.5928 | 70050 | 0.0001 | - |
| 5.5968 | 70100 | 0.0001 | - |
| 5.6008 | 70150 | 0.0001 | - |
| 5.6048 | 70200 | 0.0001 | - |
| 5.6088 | 70250 | 0.0001 | - |
| 5.6128 | 70300 | 0.0 | - |
| 5.6168 | 70350 | 0.0 | - |
| 5.6208 | 70400 | 0.0043 | - |
| 5.6248 | 70450 | 0.0151 | - |
| 5.6287 | 70500 | 0.0 | - |
| 5.6327 | 70550 | 0.0 | - |
| 5.6367 | 70600 | 0.0 | - |
| 5.6407 | 70650 | 0.0033 | - |
| 5.6447 | 70700 | 0.0274 | - |
| 5.6487 | 70750 | 0.0031 | - |
| 5.6527 | 70800 | 0.0248 | - |
| 5.6567 | 70850 | 0.0 | - |
| 5.6607 | 70900 | 0.0245 | - |
| 5.6647 | 70950 | 0.0248 | - |
| 5.6687 | 71000 | 0.0 | - |
| 5.6727 | 71050 | 0.0001 | - |
| 5.6766 | 71100 | 0.0001 | - |
| 5.6806 | 71150 | 0.0001 | - |
| 5.6846 | 71200 | 0.0001 | - |
| 5.6886 | 71250 | 0.0002 | - |
| 5.6926 | 71300 | 0.0 | - |
| 5.6966 | 71350 | 0.0001 | - |
| 5.7006 | 71400 | 0.0001 | - |
| 5.7046 | 71450 | 0.0001 | - |
| 5.7086 | 71500 | 0.0001 | - |
| 5.7126 | 71550 | 0.0001 | - |
| 5.7166 | 71600 | 0.0247 | - |
| 5.7206 | 71650 | 0.0001 | - |
| 5.7246 | 71700 | 0.0001 | - |
| 5.7285 | 71750 | 0.0001 | - |
| 5.7325 | 71800 | 0.0001 | - |
| 5.7365 | 71850 | 0.0001 | - |
| 5.7405 | 71900 | 0.0245 | - |
| 5.7445 | 71950 | 0.0001 | - |
| 5.7485 | 72000 | 0.0247 | - |
| 5.7525 | 72050 | 0.0001 | - |
| 5.7565 | 72100 | 0.0 | - |
| 5.7605 | 72150 | 0.0001 | - |
| 5.7645 | 72200 | 0.05 | - |
| 5.7685 | 72250 | 0.0001 | - |
| 5.7725 | 72300 | 0.0247 | - |
| 5.7764 | 72350 | 0.0002 | - |
| 5.7804 | 72400 | 0.0244 | - |
| 5.7844 | 72450 | 0.0 | - |
| 5.7884 | 72500 | 0.0001 | - |
| 5.7924 | 72550 | 0.0001 | - |
| 5.7964 | 72600 | 0.0 | - |
| 5.8004 | 72650 | 0.0001 | - |
| 5.8044 | 72700 | 0.0001 | - |
| 5.8084 | 72750 | 0.0001 | - |
| 5.8124 | 72800 | 0.0 | - |
| 5.8164 | 72850 | 0.0001 | - |
| 5.8204 | 72900 | 0.0001 | - |
| 5.8244 | 72950 | 0.025 | - |
| 5.8283 | 73000 | 0.0001 | - |
| 5.8323 | 73050 | 0.0 | - |
| 5.8363 | 73100 | 0.0247 | - |
| 5.8403 | 73150 | 0.0 | - |
| 5.8443 | 73200 | 0.0 | - |
| 5.8483 | 73250 | 0.0001 | - |
| 5.8523 | 73300 | 0.0 | - |
| 5.8563 | 73350 | 0.0 | - |
| 5.8603 | 73400 | 0.0001 | - |
| 5.8643 | 73450 | 0.0001 | - |
| 5.8683 | 73500 | 0.0 | - |
| 5.8723 | 73550 | 0.0248 | - |
| 5.8762 | 73600 | 0.0001 | - |
| 5.8802 | 73650 | 0.0001 | - |
| 5.8842 | 73700 | 0.0001 | - |
| 5.8882 | 73750 | 0.0001 | - |
| 5.8922 | 73800 | 0.0248 | - |
| 5.8962 | 73850 | 0.0001 | - |
| 5.9002 | 73900 | 0.0001 | - |
| 5.9042 | 73950 | 0.0001 | - |
| 5.9082 | 74000 | 0.0247 | - |
| 5.9122 | 74050 | 0.0246 | - |
| 5.9162 | 74100 | 0.0001 | - |
| 5.9202 | 74150 | 0.0001 | - |
| 5.9242 | 74200 | 0.0 | - |
| 5.9281 | 74250 | 0.0001 | - |
| 5.9321 | 74300 | 0.0 | - |
| 5.9361 | 74350 | 0.0 | - |
| 5.9401 | 74400 | 0.0 | - |
| 5.9441 | 74450 | 0.0001 | - |
| 5.9481 | 74500 | 0.0249 | - |
| 5.9521 | 74550 | 0.0 | - |
| 5.9561 | 74600 | 0.0002 | - |
| 5.9601 | 74650 | 0.0001 | - |
| 5.9641 | 74700 | 0.0001 | - |
| 5.9681 | 74750 | 0.0 | - |
| 5.9721 | 74800 | 0.0 | - |
| 5.9760 | 74850 | 0.0246 | - |
| 5.9800 | 74900 | 0.0001 | - |
| 5.9840 | 74950 | 0.0 | - |
| 5.9880 | 75000 | 0.0001 | - |
| 5.9920 | 75050 | 0.0 | - |
| 5.9960 | 75100 | 0.0001 | - |
| 6.0 | 75150 | 0.0001 | 0.0594 |
| 6.0040 | 75200 | 0.0001 | - |
| 6.0080 | 75250 | 0.0497 | - |
| 6.0120 | 75300 | 0.0248 | - |
| 6.0160 | 75350 | 0.0495 | - |
| 6.0200 | 75400 | 0.0 | - |
| 6.0240 | 75450 | 0.0001 | - |
| 6.0279 | 75500 | 0.0001 | - |
| 6.0319 | 75550 | 0.0001 | - |
| 6.0359 | 75600 | 0.0252 | - |
| 6.0399 | 75650 | 0.0001 | - |
| 6.0439 | 75700 | 0.0001 | - |
| 6.0479 | 75750 | 0.0001 | - |
| 6.0519 | 75800 | 0.0741 | - |
| 6.0559 | 75850 | 0.0 | - |
| 6.0599 | 75900 | 0.0 | - |
| 6.0639 | 75950 | 0.0001 | - |
| 6.0679 | 76000 | 0.0001 | - |
| 6.0719 | 76050 | 0.0 | - |
| 6.0758 | 76100 | 0.0001 | - |
| 6.0798 | 76150 | 0.0001 | - |
| 6.0838 | 76200 | 0.0247 | - |
| 6.0878 | 76250 | 0.0001 | - |
| 6.0918 | 76300 | 0.0001 | - |
| 6.0958 | 76350 | 0.0244 | - |
| 6.0998 | 76400 | 0.0 | - |
| 6.1038 | 76450 | 0.0001 | - |
| 6.1078 | 76500 | 0.0001 | - |
| 6.1118 | 76550 | 0.0001 | - |
| 6.1158 | 76600 | 0.0 | - |
| 6.1198 | 76650 | 0.0001 | - |
| 6.1238 | 76700 | 0.0 | - |
| 6.1277 | 76750 | 0.0245 | - |
| 6.1317 | 76800 | 0.0001 | - |
| 6.1357 | 76850 | 0.0001 | - |
| 6.1397 | 76900 | 0.0001 | - |
| 6.1437 | 76950 | 0.0001 | - |
| 6.1477 | 77000 | 0.0 | - |
| 6.1517 | 77050 | 0.0001 | - |
| 6.1557 | 77100 | 0.0001 | - |
| 6.1597 | 77150 | 0.0001 | - |
| 6.1637 | 77200 | 0.0001 | - |
| 6.1677 | 77250 | 0.0 | - |
| 6.1717 | 77300 | 0.0001 | - |
| 6.1756 | 77350 | 0.0001 | - |
| 6.1796 | 77400 | 0.0245 | - |
| 6.1836 | 77450 | 0.0 | - |
| 6.1876 | 77500 | 0.0496 | - |
| 6.1916 | 77550 | 0.0246 | - |
| 6.1956 | 77600 | 0.0001 | - |
| 6.1996 | 77650 | 0.025 | - |
| 6.2036 | 77700 | 0.0001 | - |
| 6.2076 | 77750 | 0.0 | - |
| 6.2116 | 77800 | 0.0001 | - |
| 6.2156 | 77850 | 0.0001 | - |
| 6.2196 | 77900 | 0.0248 | - |
| 6.2236 | 77950 | 0.0247 | - |
| 6.2275 | 78000 | 0.0002 | - |
| 6.2315 | 78050 | 0.0001 | - |
| 6.2355 | 78100 | 0.0246 | - |
| 6.2395 | 78150 | 0.0001 | - |
| 6.2435 | 78200 | 0.0001 | - |
| 6.2475 | 78250 | 0.0001 | - |
| 6.2515 | 78300 | 0.0249 | - |
| 6.2555 | 78350 | 0.0001 | - |
| 6.2595 | 78400 | 0.0251 | - |
| 6.2635 | 78450 | 0.0 | - |
| 6.2675 | 78500 | 0.0 | - |
| 6.2715 | 78550 | 0.0001 | - |
| 6.2754 | 78600 | 0.05 | - |
| 6.2794 | 78650 | 0.0001 | - |
| 6.2834 | 78700 | 0.0001 | - |
| 6.2874 | 78750 | 0.0 | - |
| 6.2914 | 78800 | 0.0001 | - |
| 6.2954 | 78850 | 0.0001 | - |
| 6.2994 | 78900 | 0.0 | - |
| 6.3034 | 78950 | 0.0246 | - |
| 6.3074 | 79000 | 0.0245 | - |
| 6.3114 | 79050 | 0.0001 | - |
| 6.3154 | 79100 | 0.0 | - |
| 6.3194 | 79150 | 0.0 | - |
| 6.3234 | 79200 | 0.0002 | - |
| 6.3273 | 79250 | 0.0001 | - |
| 6.3313 | 79300 | 0.0001 | - |
| 6.3353 | 79350 | 0.0 | - |
| 6.3393 | 79400 | 0.0001 | - |
| 6.3433 | 79450 | 0.0249 | - |
| 6.3473 | 79500 | 0.0001 | - |
| 6.3513 | 79550 | 0.0001 | - |
| 6.3553 | 79600 | 0.0001 | - |
| 6.3593 | 79650 | 0.0001 | - |
| 6.3633 | 79700 | 0.0 | - |
| 6.3673 | 79750 | 0.0247 | - |
| 6.3713 | 79800 | 0.0243 | - |
| 6.3752 | 79850 | 0.0 | - |
| 6.3792 | 79900 | 0.0001 | - |
| 6.3832 | 79950 | 0.0001 | - |
| 6.3872 | 80000 | 0.0001 | - |
| 6.3912 | 80050 | 0.0 | - |
| 6.3952 | 80100 | 0.0001 | - |
| 6.3992 | 80150 | 0.0 | - |
| 6.4032 | 80200 | 0.0249 | - |
| 6.4072 | 80250 | 0.0247 | - |
| 6.4112 | 80300 | 0.0 | - |
| 6.4152 | 80350 | 0.0248 | - |
| 6.4192 | 80400 | 0.0001 | - |
| 6.4232 | 80450 | 0.0247 | - |
| 6.4271 | 80500 | 0.0 | - |
| 6.4311 | 80550 | 0.0 | - |
| 6.4351 | 80600 | 0.0247 | - |
| 6.4391 | 80650 | 0.0246 | - |
| 6.4431 | 80700 | 0.0001 | - |
| 6.4471 | 80750 | 0.0242 | - |
| 6.4511 | 80800 | 0.0 | - |
| 6.4551 | 80850 | 0.0001 | - |
| 6.4591 | 80900 | 0.0001 | - |
| 6.4631 | 80950 | 0.0243 | - |
| 6.4671 | 81000 | 0.0001 | - |
| 6.4711 | 81050 | 0.0001 | - |
| 6.4750 | 81100 | 0.0491 | - |
| 6.4790 | 81150 | 0.0 | - |
| 6.4830 | 81200 | 0.0001 | - |
| 6.4870 | 81250 | 0.0001 | - |
| 6.4910 | 81300 | 0.0247 | - |
| 6.4950 | 81350 | 0.0 | - |
| 6.4990 | 81400 | 0.0001 | - |
| 6.5030 | 81450 | 0.0001 | - |
| 6.5070 | 81500 | 0.0001 | - |
| 6.5110 | 81550 | 0.0001 | - |
| 6.5150 | 81600 | 0.0001 | - |
| 6.5190 | 81650 | 0.0246 | - |
| 6.5230 | 81700 | 0.0246 | - |
| 6.5269 | 81750 | 0.0001 | - |
| 6.5309 | 81800 | 0.0001 | - |
| 6.5349 | 81850 | 0.0 | - |
| 6.5389 | 81900 | 0.0 | - |
| 6.5429 | 81950 | 0.0247 | - |
| 6.5469 | 82000 | 0.0248 | - |
| 6.5509 | 82050 | 0.0001 | - |
| 6.5549 | 82100 | 0.0 | - |
| 6.5589 | 82150 | 0.0001 | - |
| 6.5629 | 82200 | 0.0001 | - |
| 6.5669 | 82250 | 0.0001 | - |
| 6.5709 | 82300 | 0.0 | - |
| 6.5749 | 82350 | 0.0001 | - |
| 6.5788 | 82400 | 0.0001 | - |
| 6.5828 | 82450 | 0.0 | - |
| 6.5868 | 82500 | 0.0001 | - |
| 6.5908 | 82550 | 0.0001 | - |
| 6.5948 | 82600 | 0.0001 | - |
| 6.5988 | 82650 | 0.0001 | - |
| 6.6028 | 82700 | 0.0001 | - |
| 6.6068 | 82750 | 0.0001 | - |
| 6.6108 | 82800 | 0.0 | - |
| 6.6148 | 82850 | 0.0 | - |
| 6.6188 | 82900 | 0.0001 | - |
| 6.6228 | 82950 | 0.0244 | - |
| 6.6267 | 83000 | 0.0001 | - |
| 6.6307 | 83050 | 0.0247 | - |
| 6.6347 | 83100 | 0.0 | - |
| 6.6387 | 83150 | 0.0001 | - |
| 6.6427 | 83200 | 0.0246 | - |
| 6.6467 | 83250 | 0.0001 | - |
| 6.6507 | 83300 | 0.0001 | - |
| 6.6547 | 83350 | 0.0 | - |
| 6.6587 | 83400 | 0.0 | - |
| 6.6627 | 83450 | 0.0 | - |
| 6.6667 | 83500 | 0.0 | - |
| 6.6707 | 83550 | 0.0 | - |
| 6.6747 | 83600 | 0.0001 | - |
| 6.6786 | 83650 | 0.0001 | - |
| 6.6826 | 83700 | 0.0 | - |
| 6.6866 | 83750 | 0.0001 | - |
| 6.6906 | 83800 | 0.0 | - |
| 6.6946 | 83850 | 0.0001 | - |
| 6.6986 | 83900 | 0.0001 | - |
| 6.7026 | 83950 | 0.0001 | - |
| 6.7066 | 84000 | 0.025 | - |
| 6.7106 | 84050 | 0.0001 | - |
| 6.7146 | 84100 | 0.0 | - |
| 6.7186 | 84150 | 0.0 | - |
| 6.7226 | 84200 | 0.0 | - |
| 6.7265 | 84250 | 0.0001 | - |
| 6.7305 | 84300 | 0.0246 | - |
| 6.7345 | 84350 | 0.0001 | - |
| 6.7385 | 84400 | 0.0251 | - |
| 6.7425 | 84450 | 0.0 | - |
| 6.7465 | 84500 | 0.0 | - |
| 6.7505 | 84550 | 0.0001 | - |
| 6.7545 | 84600 | 0.0 | - |
| 6.7585 | 84650 | 0.0248 | - |
| 6.7625 | 84700 | 0.0 | - |
| 6.7665 | 84750 | 0.0001 | - |
| 6.7705 | 84800 | 0.0 | - |
| 6.7745 | 84850 | 0.0247 | - |
| 6.7784 | 84900 | 0.0 | - |
| 6.7824 | 84950 | 0.0001 | - |
| 6.7864 | 85000 | 0.0 | - |
| 6.7904 | 85050 | 0.0 | - |
| 6.7944 | 85100 | 0.0 | - |
| 6.7984 | 85150 | 0.0002 | - |
| 6.8024 | 85200 | 0.0 | - |
| 6.8064 | 85250 | 0.0001 | - |
| 6.8104 | 85300 | 0.0001 | - |
| 6.8144 | 85350 | 0.0 | - |
| 6.8184 | 85400 | 0.0001 | - |
| 6.8224 | 85450 | 0.0001 | - |
| 6.8263 | 85500 | 0.0 | - |
| 6.8303 | 85550 | 0.0001 | - |
| 6.8343 | 85600 | 0.0 | - |
| 6.8383 | 85650 | 0.0 | - |
| 6.8423 | 85700 | 0.0 | - |
| 6.8463 | 85750 | 0.0248 | - |
| 6.8503 | 85800 | 0.0 | - |
| 6.8543 | 85850 | 0.0001 | - |
| 6.8583 | 85900 | 0.0 | - |
| 6.8623 | 85950 | 0.0 | - |
| 6.8663 | 86000 | 0.0001 | - |
| 6.8703 | 86050 | 0.0 | - |
| 6.8743 | 86100 | 0.0001 | - |
| 6.8782 | 86150 | 0.0001 | - |
| 6.8822 | 86200 | 0.0 | - |
| 6.8862 | 86250 | 0.0 | - |
| 6.8902 | 86300 | 0.0 | - |
| 6.8942 | 86350 | 0.0 | - |
| 6.8982 | 86400 | 0.0 | - |
| 6.9022 | 86450 | 0.0001 | - |
| 6.9062 | 86500 | 0.0 | - |
| 6.9102 | 86550 | 0.0 | - |
| 6.9142 | 86600 | 0.0 | - |
| 6.9182 | 86650 | 0.0 | - |
| 6.9222 | 86700 | 0.0 | - |
| 6.9261 | 86750 | 0.0001 | - |
| 6.9301 | 86800 | 0.0246 | - |
| 6.9341 | 86850 | 0.025 | - |
| 6.9381 | 86900 | 0.0001 | - |
| 6.9421 | 86950 | 0.0 | - |
| 6.9461 | 87000 | 0.0 | - |
| 6.9501 | 87050 | 0.0 | - |
| 6.9541 | 87100 | 0.0001 | - |
| 6.9581 | 87150 | 0.0001 | - |
| 6.9621 | 87200 | 0.0 | - |
| 6.9661 | 87250 | 0.0 | - |
| 6.9701 | 87300 | 0.0001 | - |
| 6.9741 | 87350 | 0.0081 | - |
| 6.9780 | 87400 | 0.0 | - |
| 6.9820 | 87450 | 0.0469 | - |
| 6.9860 | 87500 | 0.0 | - |
| 6.9900 | 87550 | 0.0 | - |
| 6.9940 | 87600 | 0.0 | - |
| 6.9980 | 87650 | 0.0 | - |
| **7.0** | **87675** | **-** | **0.039** |
| 7.0020 | 87700 | 0.0248 | - |
| 7.0060 | 87750 | 0.0246 | - |
| 7.0100 | 87800 | 0.0 | - |
| 7.0140 | 87850 | 0.0001 | - |
| 7.0180 | 87900 | 0.0001 | - |
| 7.0220 | 87950 | 0.0 | - |
| 7.0259 | 88000 | 0.0 | - |
| 7.0299 | 88050 | 0.0001 | - |
| 7.0339 | 88100 | 0.0001 | - |
| 7.0379 | 88150 | 0.0 | - |
| 7.0419 | 88200 | 0.0034 | - |
| 7.0459 | 88250 | 0.0001 | - |
| 7.0499 | 88300 | 0.004 | - |
| 7.0539 | 88350 | 0.0 | - |
| 7.0579 | 88400 | 0.0 | - |
| 7.0619 | 88450 | 0.0001 | - |
| 7.0659 | 88500 | 0.0249 | - |
| 7.0699 | 88550 | 0.0 | - |
| 7.0739 | 88600 | 0.0 | - |
| 7.0778 | 88650 | 0.0001 | - |
| 7.0818 | 88700 | 0.0 | - |
| 7.0858 | 88750 | 0.0 | - |
| 7.0898 | 88800 | 0.0 | - |
| 7.0938 | 88850 | 0.0 | - |
| 7.0978 | 88900 | 0.0 | - |
| 7.1018 | 88950 | 0.0 | - |
| 7.1058 | 89000 | 0.0 | - |
| 7.1098 | 89050 | 0.0 | - |
| 7.1138 | 89100 | 0.0 | - |
| 7.1178 | 89150 | 0.0 | - |
| 7.1218 | 89200 | 0.0001 | - |
| 7.1257 | 89250 | 0.0024 | - |
| 7.1297 | 89300 | 0.0 | - |
| 7.1337 | 89350 | 0.0001 | - |
| 7.1377 | 89400 | 0.0 | - |
| 7.1417 | 89450 | 0.0 | - |
| 7.1457 | 89500 | 0.0001 | - |
| 7.1497 | 89550 | 0.0 | - |
| 7.1537 | 89600 | 0.0 | - |
| 7.1577 | 89650 | 0.0 | - |
| 7.1617 | 89700 | 0.0001 | - |
| 7.1657 | 89750 | 0.0001 | - |
| 7.1697 | 89800 | 0.0 | - |
| 7.1737 | 89850 | 0.0 | - |
| 7.1776 | 89900 | 0.0135 | - |
| 7.1816 | 89950 | 0.0001 | - |
| 7.1856 | 90000 | 0.0043 | - |
| 7.1896 | 90050 | 0.0072 | - |
| 7.1936 | 90100 | 0.0 | - |
| 7.1976 | 90150 | 0.0 | - |
| 7.2016 | 90200 | 0.0 | - |
| 7.2056 | 90250 | 0.0247 | - |
| 7.2096 | 90300 | 0.0 | - |
| 7.2136 | 90350 | 0.0 | - |
| 7.2176 | 90400 | 0.0 | - |
| 7.2216 | 90450 | 0.0 | - |
| 7.2255 | 90500 | 0.0001 | - |
| 7.2295 | 90550 | 0.0 | - |
| 7.2335 | 90600 | 0.0363 | - |
| 7.2375 | 90650 | 0.0 | - |
| 7.2415 | 90700 | 0.0 | - |
| 7.2455 | 90750 | 0.0 | - |
| 7.2495 | 90800 | 0.0 | - |
| 7.2535 | 90850 | 0.0 | - |
| 7.2575 | 90900 | 0.0 | - |
| 7.2615 | 90950 | 0.0138 | - |
| 7.2655 | 91000 | 0.0 | - |
| 7.2695 | 91050 | 0.0 | - |
| 7.2735 | 91100 | 0.0 | - |
| 7.2774 | 91150 | 0.0 | - |
| 7.2814 | 91200 | 0.0252 | - |
| 7.2854 | 91250 | 0.0 | - |
| 7.2894 | 91300 | 0.0 | - |
| 7.2934 | 91350 | 0.0 | - |
| 7.2974 | 91400 | 0.0105 | - |
| 7.3014 | 91450 | 0.0244 | - |
| 7.3054 | 91500 | 0.0 | - |
| 7.3094 | 91550 | 0.0 | - |
| 7.3134 | 91600 | 0.0247 | - |
| 7.3174 | 91650 | 0.0 | - |
| 7.3214 | 91700 | 0.0001 | - |
| 7.3253 | 91750 | 0.0 | - |
| 7.3293 | 91800 | 0.0 | - |
| 7.3333 | 91850 | 0.0189 | - |
| 7.3373 | 91900 | 0.0 | - |
| 7.3413 | 91950 | 0.0246 | - |
| 7.3453 | 92000 | 0.0 | - |
| 7.3493 | 92050 | 0.0 | - |
| 7.3533 | 92100 | 0.0 | - |
| 7.3573 | 92150 | 0.0 | - |
| 7.3613 | 92200 | 0.0 | - |
| 7.3653 | 92250 | 0.0247 | - |
| 7.3693 | 92300 | 0.0122 | - |
| 7.3733 | 92350 | 0.0 | - |
| 7.3772 | 92400 | 0.0 | - |
| 7.3812 | 92450 | 0.0022 | - |
| 7.3852 | 92500 | 0.0 | - |
| 7.3892 | 92550 | 0.0001 | - |
| 7.3932 | 92600 | 0.0 | - |
| 7.3972 | 92650 | 0.0 | - |
| 7.4012 | 92700 | 0.0 | - |
| 7.4052 | 92750 | 0.0032 | - |
| 7.4092 | 92800 | 0.0001 | - |
| 7.4132 | 92850 | 0.0037 | - |
| 7.4172 | 92900 | 0.0001 | - |
| 7.4212 | 92950 | 0.0028 | - |
| 7.4251 | 93000 | 0.0001 | - |
| 7.4291 | 93050 | 0.0 | - |
| 7.4331 | 93100 | 0.0039 | - |
| 7.4371 | 93150 | 0.0036 | - |
| 7.4411 | 93200 | 0.0 | - |
| 7.4451 | 93250 | 0.0 | - |
| 7.4491 | 93300 | 0.0 | - |
| 7.4531 | 93350 | 0.0 | - |
| 7.4571 | 93400 | 0.0001 | - |
| 7.4611 | 93450 | 0.0091 | - |
| 7.4651 | 93500 | 0.0 | - |
| 7.4691 | 93550 | 0.0 | - |
| 7.4731 | 93600 | 0.0275 | - |
| 7.4770 | 93650 | 0.0 | - |
| 7.4810 | 93700 | 0.0 | - |
| 7.4850 | 93750 | 0.0035 | - |
| 7.4890 | 93800 | 0.0246 | - |
| 7.4930 | 93850 | 0.0025 | - |
| 7.4970 | 93900 | 0.0 | - |
| 7.5010 | 93950 | 0.0 | - |
| 7.5050 | 94000 | 0.0 | - |
| 7.5090 | 94050 | 0.0 | - |
| 7.5130 | 94100 | 0.0 | - |
| 7.5170 | 94150 | 0.0 | - |
| 7.5210 | 94200 | 0.0023 | - |
| 7.5250 | 94250 | 0.0 | - |
| 7.5289 | 94300 | 0.0 | - |
| 7.5329 | 94350 | 0.0 | - |
| 7.5369 | 94400 | 0.0 | - |
| 7.5409 | 94450 | 0.0027 | - |
| 7.5449 | 94500 | 0.0028 | - |
| 7.5489 | 94550 | 0.0247 | - |
| 7.5529 | 94600 | 0.0 | - |
| 7.5569 | 94650 | 0.0 | - |
| 7.5609 | 94700 | 0.0 | - |
| 7.5649 | 94750 | 0.0036 | - |
| 7.5689 | 94800 | 0.0 | - |
| 7.5729 | 94850 | 0.0 | - |
| 7.5768 | 94900 | 0.0 | - |
| 7.5808 | 94950 | 0.0 | - |
| 7.5848 | 95000 | 0.0 | - |
| 7.5888 | 95050 | 0.0 | - |
| 7.5928 | 95100 | 0.0001 | - |
| 7.5968 | 95150 | 0.0 | - |
| 7.6008 | 95200 | 0.0 | - |
| 7.6048 | 95250 | 0.0001 | - |
| 7.6088 | 95300 | 0.0 | - |
| 7.6128 | 95350 | 0.0 | - |
| 7.6168 | 95400 | 0.0028 | - |
| 7.6208 | 95450 | 0.0119 | - |
| 7.6248 | 95500 | 0.0028 | - |
| 7.6287 | 95550 | 0.0 | - |
| 7.6327 | 95600 | 0.0001 | - |
| 7.6367 | 95650 | 0.0 | - |
| 7.6407 | 95700 | 0.0318 | - |
| 7.6447 | 95750 | 0.0037 | - |
| 7.6487 | 95800 | 0.0035 | - |
| 7.6527 | 95850 | 0.0089 | - |
| 7.6567 | 95900 | 0.0 | - |
| 7.6607 | 95950 | 0.006 | - |
| 7.6647 | 96000 | 0.0 | - |
| 7.6687 | 96050 | 0.0 | - |
| 7.6727 | 96100 | 0.0 | - |
| 7.6766 | 96150 | 0.0 | - |
| 7.6806 | 96200 | 0.0 | - |
| 7.6846 | 96250 | 0.0 | - |
| 7.6886 | 96300 | 0.0105 | - |
| 7.6926 | 96350 | 0.0 | - |
| 7.6966 | 96400 | 0.0 | - |
| 7.7006 | 96450 | 0.0 | - |
| 7.7046 | 96500 | 0.0 | - |
| 7.7086 | 96550 | 0.0 | - |
| 7.7126 | 96600 | 0.0 | - |
| 7.7166 | 96650 | 0.0024 | - |
| 7.7206 | 96700 | 0.0001 | - |
| 7.7246 | 96750 | 0.0 | - |
| 7.7285 | 96800 | 0.0123 | - |
| 7.7325 | 96850 | 0.0 | - |
| 7.7365 | 96900 | 0.0031 | - |
| 7.7405 | 96950 | 0.0 | - |
| 7.7445 | 97000 | 0.0025 | - |
| 7.7485 | 97050 | 0.0 | - |
| 7.7525 | 97100 | 0.0 | - |
| 7.7565 | 97150 | 0.0 | - |
| 7.7605 | 97200 | 0.0022 | - |
| 7.7645 | 97250 | 0.0251 | - |
| 7.7685 | 97300 | 0.002 | - |
| 7.7725 | 97350 | 0.0118 | - |
| 7.7764 | 97400 | 0.0019 | - |
| 7.7804 | 97450 | 0.0001 | - |
| 7.7844 | 97500 | 0.0123 | - |
| 7.7884 | 97550 | 0.0 | - |
| 7.7924 | 97600 | 0.0 | - |
| 7.7964 | 97650 | 0.0 | - |
| 7.8004 | 97700 | 0.0097 | - |
| 7.8044 | 97750 | 0.0 | - |
| 7.8084 | 97800 | 0.0 | - |
| 7.8124 | 97850 | 0.0 | - |
| 7.8164 | 97900 | 0.0001 | - |
| 7.8204 | 97950 | 0.0001 | - |
| 7.8244 | 98000 | 0.0251 | - |
| 7.8283 | 98050 | 0.0 | - |
| 7.8323 | 98100 | 0.009 | - |
| 7.8363 | 98150 | 0.0246 | - |
| 7.8403 | 98200 | 0.0 | - |
| 7.8443 | 98250 | 0.0 | - |
| 7.8483 | 98300 | 0.0 | - |
| 7.8523 | 98350 | 0.0 | - |
| 7.8563 | 98400 | 0.0 | - |
| 7.8603 | 98450 | 0.0001 | - |
| 7.8643 | 98500 | 0.0 | - |
| 7.8683 | 98550 | 0.0018 | - |
| 7.8723 | 98600 | 0.0 | - |
| 7.8762 | 98650 | 0.0001 | - |
| 7.8802 | 98700 | 0.0 | - |
| 7.8842 | 98750 | 0.0 | - |
| 7.8882 | 98800 | 0.0024 | - |
| 7.8922 | 98850 | 0.0 | - |
| 7.8962 | 98900 | 0.0 | - |
| 7.9002 | 98950 | 0.0 | - |
| 7.9042 | 99000 | 0.0027 | - |
| 7.9082 | 99050 | 0.0027 | - |
| 7.9122 | 99100 | 0.0 | - |
| 7.9162 | 99150 | 0.0 | - |
| 7.9202 | 99200 | 0.0 | - |
| 7.9242 | 99250 | 0.0 | - |
| 7.9281 | 99300 | 0.0138 | - |
| 7.9321 | 99350 | 0.0 | - |
| 7.9361 | 99400 | 0.0 | - |
| 7.9401 | 99450 | 0.0 | - |
| 7.9441 | 99500 | 0.0001 | - |
| 7.9481 | 99550 | 0.0019 | - |
| 7.9521 | 99600 | 0.0 | - |
| 7.9561 | 99650 | 0.0 | - |
| 7.9601 | 99700 | 0.0001 | - |
| 7.9641 | 99750 | 0.0 | - |
| 7.9681 | 99800 | 0.0 | - |
| 7.9721 | 99850 | 0.0016 | - |
| 7.9760 | 99900 | 0.0001 | - |
| 7.9800 | 99950 | 0.0265 | - |
| 7.9840 | 100000 | 0.0 | - |
| 7.9880 | 100050 | 0.0 | - |
| 7.9920 | 100100 | 0.0 | - |
| 7.9960 | 100150 | 0.0127 | - |
| 8.0 | 100200 | 0.0001 | 0.0405 |
| 8.0040 | 100250 | 0.0247 | - |
| 8.0080 | 100300 | 0.025 | - |
| 8.0120 | 100350 | 0.0 | - |
| 8.0160 | 100400 | 0.0072 | - |
| 8.0200 | 100450 | 0.0 | - |
| 8.0240 | 100500 | 0.0 | - |
| 8.0279 | 100550 | 0.0 | - |
| 8.0319 | 100600 | 0.0 | - |
| 8.0359 | 100650 | 0.0251 | - |
| 8.0399 | 100700 | 0.0 | - |
| 8.0439 | 100750 | 0.0 | - |
| 8.0479 | 100800 | 0.0042 | - |
| 8.0519 | 100850 | 0.0036 | - |
| 8.0559 | 100900 | 0.0 | - |
| 8.0599 | 100950 | 0.0 | - |
| 8.0639 | 101000 | 0.0 | - |
| 8.0679 | 101050 | 0.0001 | - |
| 8.0719 | 101100 | 0.0 | - |
| 8.0758 | 101150 | 0.0 | - |
| 8.0798 | 101200 | 0.0116 | - |
| 8.0838 | 101250 | 0.0027 | - |
| 8.0878 | 101300 | 0.0 | - |
| 8.0918 | 101350 | 0.0 | - |
| 8.0958 | 101400 | 0.0032 | - |
| 8.0998 | 101450 | 0.0 | - |
| 8.1038 | 101500 | 0.0 | - |
| 8.1078 | 101550 | 0.0 | - |
| 8.1118 | 101600 | 0.0097 | - |
| 8.1158 | 101650 | 0.0 | - |
| 8.1198 | 101700 | 0.0105 | - |
| 8.1238 | 101750 | 0.0 | - |
| 8.1277 | 101800 | 0.0026 | - |
| 8.1317 | 101850 | 0.0 | - |
| 8.1357 | 101900 | 0.0 | - |
| 8.1397 | 101950 | 0.0 | - |
| 8.1437 | 102000 | 0.0 | - |
| 8.1477 | 102050 | 0.0 | - |
| 8.1517 | 102100 | 0.0 | - |
| 8.1557 | 102150 | 0.0 | - |
| 8.1597 | 102200 | 0.0 | - |
| 8.1637 | 102250 | 0.0 | - |
| 8.1677 | 102300 | 0.0 | - |
| 8.1717 | 102350 | 0.0 | - |
| 8.1756 | 102400 | 0.0028 | - |
| 8.1796 | 102450 | 0.0 | - |
| 8.1836 | 102500 | 0.0037 | - |
| 8.1876 | 102550 | 0.0065 | - |
| 8.1916 | 102600 | 0.0 | - |
| 8.1956 | 102650 | 0.0001 | - |
| 8.1996 | 102700 | 0.0251 | - |
| 8.2036 | 102750 | 0.0 | - |
| 8.2076 | 102800 | 0.0 | - |
| 8.2116 | 102850 | 0.0 | - |
| 8.2156 | 102900 | 0.0 | - |
| 8.2196 | 102950 | 0.0023 | - |
| 8.2236 | 103000 | 0.0023 | - |
| 8.2275 | 103050 | 0.0 | - |
| 8.2315 | 103100 | 0.0246 | - |
| 8.2355 | 103150 | 0.0 | - |
| 8.2395 | 103200 | 0.0 | - |
| 8.2435 | 103250 | 0.0 | - |
| 8.2475 | 103300 | 0.0 | - |
| 8.2515 | 103350 | 0.0253 | - |
| 8.2555 | 103400 | 0.0 | - |
| 8.2595 | 103450 | 0.0148 | - |
| 8.2635 | 103500 | 0.0 | - |
| 8.2675 | 103550 | 0.0 | - |
| 8.2715 | 103600 | 0.0001 | - |
| 8.2754 | 103650 | 0.0271 | - |
| 8.2794 | 103700 | 0.0 | - |
| 8.2834 | 103750 | 0.0 | - |
| 8.2874 | 103800 | 0.0 | - |
| 8.2914 | 103850 | 0.0 | - |
| 8.2954 | 103900 | 0.0108 | - |
| 8.2994 | 103950 | 0.0245 | - |
| 8.3034 | 104000 | 0.0 | - |
| 8.3074 | 104050 | 0.0248 | - |
| 8.3114 | 104100 | 0.0 | - |
| 8.3154 | 104150 | 0.0 | - |
| 8.3194 | 104200 | 0.0 | - |
| 8.3234 | 104250 | 0.0001 | - |
| 8.3273 | 104300 | 0.0 | - |
| 8.3313 | 104350 | 0.0098 | - |
| 8.3353 | 104400 | 0.0 | - |
| 8.3393 | 104450 | 0.0247 | - |
| 8.3433 | 104500 | 0.0001 | - |
| 8.3473 | 104550 | 0.0 | - |
| 8.3513 | 104600 | 0.0 | - |
| 8.3553 | 104650 | 0.0001 | - |
| 8.3593 | 104700 | 0.0 | - |
| 8.3633 | 104750 | 0.0247 | - |
| 8.3673 | 104800 | 0.0 | - |
| 8.3713 | 104850 | 0.0001 | - |
| 8.3752 | 104900 | 0.0001 | - |
| 8.3792 | 104950 | 0.0 | - |
| 8.3832 | 105000 | 0.0 | - |
| 8.3872 | 105050 | 0.0 | - |
| 8.3912 | 105100 | 0.0 | - |
| 8.3952 | 105150 | 0.0 | - |
| 8.3992 | 105200 | 0.0001 | - |
| 8.4032 | 105250 | 0.0 | - |
| 8.4072 | 105300 | 0.0001 | - |
| 8.4112 | 105350 | 0.0001 | - |
| 8.4152 | 105400 | 0.0 | - |
| 8.4192 | 105450 | 0.0 | - |
| 8.4232 | 105500 | 0.025 | - |
| 8.4271 | 105550 | 0.0 | - |
| 8.4311 | 105600 | 0.0 | - |
| 8.4351 | 105650 | 0.0 | - |
| 8.4391 | 105700 | 0.0 | - |
| 8.4431 | 105750 | 0.0001 | - |
| 8.4471 | 105800 | 0.0 | - |
| 8.4511 | 105850 | 0.0 | - |
| 8.4551 | 105900 | 0.0001 | - |
| 8.4591 | 105950 | 0.0246 | - |
| 8.4631 | 106000 | 0.0 | - |
| 8.4671 | 106050 | 0.0 | - |
| 8.4711 | 106100 | 0.0246 | - |
| 8.4750 | 106150 | 0.0001 | - |
| 8.4790 | 106200 | 0.0 | - |
| 8.4830 | 106250 | 0.0 | - |
| 8.4870 | 106300 | 0.0246 | - |
| 8.4910 | 106350 | 0.0 | - |
| 8.4950 | 106400 | 0.0 | - |
| 8.4990 | 106450 | 0.0001 | - |
| 8.5030 | 106500 | 0.0001 | - |
| 8.5070 | 106550 | 0.0 | - |
| 8.5110 | 106600 | 0.0 | - |
| 8.5150 | 106650 | 0.0001 | - |
| 8.5190 | 106700 | 0.0 | - |
| 8.5230 | 106750 | 0.0 | - |
| 8.5269 | 106800 | 0.0 | - |
| 8.5309 | 106850 | 0.0001 | - |
| 8.5349 | 106900 | 0.0 | - |
| 8.5389 | 106950 | 0.0 | - |
| 8.5429 | 107000 | 0.0001 | - |
| 8.5469 | 107050 | 0.0 | - |
| 8.5509 | 107100 | 0.0 | - |
| 8.5549 | 107150 | 0.0 | - |
| 8.5589 | 107200 | 0.0 | - |
| 8.5629 | 107250 | 0.0001 | - |
| 8.5669 | 107300 | 0.0 | - |
| 8.5709 | 107350 | 0.0 | - |
| 8.5749 | 107400 | 0.0001 | - |
| 8.5788 | 107450 | 0.0251 | - |
| 8.5828 | 107500 | 0.0 | - |
| 8.5868 | 107550 | 0.0 | - |
| 8.5908 | 107600 | 0.0001 | - |
| 8.5948 | 107650 | 0.0 | - |
| 8.5988 | 107700 | 0.0 | - |
| 8.6028 | 107750 | 0.0 | - |
| 8.6068 | 107800 | 0.0001 | - |
| 8.6108 | 107850 | 0.0 | - |
| 8.6148 | 107900 | 0.0 | - |
| 8.6188 | 107950 | 0.0245 | - |
| 8.6228 | 108000 | 0.0 | - |
| 8.6267 | 108050 | 0.0 | - |
| 8.6307 | 108100 | 0.0249 | - |
| 8.6347 | 108150 | 0.0 | - |
| 8.6387 | 108200 | 0.0246 | - |
| 8.6427 | 108250 | 0.0 | - |
| 8.6467 | 108300 | 0.0001 | - |
| 8.6507 | 108350 | 0.0001 | - |
| 8.6547 | 108400 | 0.0001 | - |
| 8.6587 | 108450 | 0.0 | - |
| 8.6627 | 108500 | 0.0 | - |
| 8.6667 | 108550 | 0.0 | - |
| 8.6707 | 108600 | 0.0 | - |
| 8.6747 | 108650 | 0.0 | - |
| 8.6786 | 108700 | 0.0 | - |
| 8.6826 | 108750 | 0.0 | - |
| 8.6866 | 108800 | 0.0001 | - |
| 8.6906 | 108850 | 0.0 | - |
| 8.6946 | 108900 | 0.0 | - |
| 8.6986 | 108950 | 0.0 | - |
| 8.7026 | 109000 | 0.0 | - |
| 8.7066 | 109050 | 0.0248 | - |
| 8.7106 | 109100 | 0.0001 | - |
| 8.7146 | 109150 | 0.0 | - |
| 8.7186 | 109200 | 0.0 | - |
| 8.7226 | 109250 | 0.0 | - |
| 8.7265 | 109300 | 0.0246 | - |
| 8.7305 | 109350 | 0.0001 | - |
| 8.7345 | 109400 | 0.0 | - |
| 8.7385 | 109450 | 0.025 | - |
| 8.7425 | 109500 | 0.0 | - |
| 8.7465 | 109550 | 0.0 | - |
| 8.7505 | 109600 | 0.0 | - |
| 8.7545 | 109650 | 0.0 | - |
| 8.7585 | 109700 | 0.025 | - |
| 8.7625 | 109750 | 0.0001 | - |
| 8.7665 | 109800 | 0.0001 | - |
| 8.7705 | 109850 | 0.0248 | - |
| 8.7745 | 109900 | 0.0001 | - |
| 8.7784 | 109950 | 0.0 | - |
| 8.7824 | 110000 | 0.0 | - |
| 8.7864 | 110050 | 0.0 | - |
| 8.7904 | 110100 | 0.0 | - |
| 8.7944 | 110150 | 0.0 | - |
| 8.7984 | 110200 | 0.0001 | - |
| 8.8024 | 110250 | 0.0 | - |
| 8.8064 | 110300 | 0.0 | - |
| 8.8104 | 110350 | 0.0 | - |
| 8.8144 | 110400 | 0.0 | - |
| 8.8184 | 110450 | 0.0001 | - |
| 8.8224 | 110500 | 0.0001 | - |
| 8.8263 | 110550 | 0.0 | - |
| 8.8303 | 110600 | 0.0001 | - |
| 8.8343 | 110650 | 0.0 | - |
| 8.8383 | 110700 | 0.0 | - |
| 8.8423 | 110750 | 0.0 | - |
| 8.8463 | 110800 | 0.0247 | - |
| 8.8503 | 110850 | 0.0 | - |
| 8.8543 | 110900 | 0.0 | - |
| 8.8583 | 110950 | 0.0 | - |
| 8.8623 | 111000 | 0.0 | - |
| 8.8663 | 111050 | 0.0001 | - |
| 8.8703 | 111100 | 0.0 | - |
| 8.8743 | 111150 | 0.0001 | - |
| 8.8782 | 111200 | 0.0001 | - |
| 8.8822 | 111250 | 0.0 | - |
| 8.8862 | 111300 | 0.0 | - |
| 8.8902 | 111350 | 0.0001 | - |
| 8.8942 | 111400 | 0.0 | - |
| 8.8982 | 111450 | 0.0 | - |
| 8.9022 | 111500 | 0.0 | - |
| 8.9062 | 111550 | 0.0 | - |
| 8.9102 | 111600 | 0.0 | - |
| 8.9142 | 111650 | 0.0 | - |
| 8.9182 | 111700 | 0.0 | - |
| 8.9222 | 111750 | 0.0 | - |
| 8.9261 | 111800 | 0.0247 | - |
| 8.9301 | 111850 | 0.0 | - |
| 8.9341 | 111900 | 0.0248 | - |
| 8.9381 | 111950 | 0.0 | - |
| 8.9421 | 112000 | 0.0 | - |
| 8.9461 | 112050 | 0.0 | - |
| 8.9501 | 112100 | 0.0 | - |
| 8.9541 | 112150 | 0.0 | - |
| 8.9581 | 112200 | 0.0 | - |
| 8.9621 | 112250 | 0.0001 | - |
| 8.9661 | 112300 | 0.0 | - |
| 8.9701 | 112350 | 0.0001 | - |
| 8.9741 | 112400 | 0.0001 | - |
| 8.9780 | 112450 | 0.0247 | - |
| 8.9820 | 112500 | 0.0496 | - |
| 8.9860 | 112550 | 0.0 | - |
| 8.9900 | 112600 | 0.0001 | - |
| 8.9940 | 112650 | 0.0 | - |
| 8.9980 | 112700 | 0.0 | - |
| 9.0 | 112725 | - | 0.0579 |
| 9.0020 | 112750 | 0.0493 | - |
| 9.0060 | 112800 | 0.0 | - |
| 9.0100 | 112850 | 0.0001 | - |
| 9.0140 | 112900 | 0.0001 | - |
| 9.0180 | 112950 | 0.0 | - |
| 9.0220 | 113000 | 0.0 | - |
| 9.0259 | 113050 | 0.0 | - |
| 9.0299 | 113100 | 0.0 | - |
| 9.0339 | 113150 | 0.0001 | - |
| 9.0379 | 113200 | 0.0 | - |
| 9.0419 | 113250 | 0.0 | - |
| 9.0459 | 113300 | 0.0 | - |
| 9.0499 | 113350 | 0.0 | - |
| 9.0539 | 113400 | 0.0 | - |
| 9.0579 | 113450 | 0.0 | - |
| 9.0619 | 113500 | 0.0 | - |
| 9.0659 | 113550 | 0.0246 | - |
| 9.0699 | 113600 | 0.0 | - |
| 9.0739 | 113650 | 0.0 | - |
| 9.0778 | 113700 | 0.0001 | - |
| 9.0818 | 113750 | 0.0001 | - |
| 9.0858 | 113800 | 0.0 | - |
| 9.0898 | 113850 | 0.0001 | - |
| 9.0938 | 113900 | 0.0 | - |
| 9.0978 | 113950 | 0.0 | - |
| 9.1018 | 114000 | 0.0 | - |
| 9.1058 | 114050 | 0.0 | - |
| 9.1098 | 114100 | 0.0 | - |
| 9.1138 | 114150 | 0.0 | - |
| 9.1178 | 114200 | 0.0 | - |
| 9.1218 | 114250 | 0.0 | - |
| 9.1257 | 114300 | 0.0001 | - |
| 9.1297 | 114350 | 0.0 | - |
| 9.1337 | 114400 | 0.0001 | - |
| 9.1377 | 114450 | 0.0 | - |
| 9.1417 | 114500 | 0.0 | - |
| 9.1457 | 114550 | 0.0001 | - |
| 9.1497 | 114600 | 0.0 | - |
| 9.1537 | 114650 | 0.0 | - |
| 9.1577 | 114700 | 0.0 | - |
| 9.1617 | 114750 | 0.0001 | - |
| 9.1657 | 114800 | 0.0 | - |
| 9.1697 | 114850 | 0.0 | - |
| 9.1737 | 114900 | 0.0 | - |
| 9.1776 | 114950 | 0.0247 | - |
| 9.1816 | 115000 | 0.0001 | - |
| 9.1856 | 115050 | 0.0001 | - |
| 9.1896 | 115100 | 0.0001 | - |
| 9.1936 | 115150 | 0.0 | - |
| 9.1976 | 115200 | 0.0 | - |
| 9.2016 | 115250 | 0.0 | - |
| 9.2056 | 115300 | 0.0247 | - |
| 9.2096 | 115350 | 0.0 | - |
| 9.2136 | 115400 | 0.0 | - |
| 9.2176 | 115450 | 0.0 | - |
| 9.2216 | 115500 | 0.0 | - |
| 9.2255 | 115550 | 0.0 | - |
| 9.2295 | 115600 | 0.0245 | - |
| 9.2335 | 115650 | 0.0248 | - |
| 9.2375 | 115700 | 0.0 | - |
| 9.2415 | 115750 | 0.0001 | - |
| 9.2455 | 115800 | 0.0 | - |
| 9.2495 | 115850 | 0.0 | - |
| 9.2535 | 115900 | 0.0 | - |
| 9.2575 | 115950 | 0.0246 | - |
| 9.2615 | 116000 | 0.0 | - |
| 9.2655 | 116050 | 0.0 | - |
| 9.2695 | 116100 | 0.0 | - |
| 9.2735 | 116150 | 0.0 | - |
| 9.2774 | 116200 | 0.0 | - |
| 9.2814 | 116250 | 0.0246 | - |
| 9.2854 | 116300 | 0.0 | - |
| 9.2894 | 116350 | 0.0 | - |
| 9.2934 | 116400 | 0.0247 | - |
| 9.2974 | 116450 | 0.0245 | - |
| 9.3014 | 116500 | 0.0 | - |
| 9.3054 | 116550 | 0.0 | - |
| 9.3094 | 116600 | 0.0 | - |
| 9.3134 | 116650 | 0.0244 | - |
| 9.3174 | 116700 | 0.0001 | - |
| 9.3214 | 116750 | 0.0 | - |
| 9.3253 | 116800 | 0.0001 | - |
| 9.3293 | 116850 | 0.0232 | - |
| 9.3333 | 116900 | 0.0192 | - |
| 9.3373 | 116950 | 0.0246 | - |
| 9.3413 | 117000 | 0.0 | - |
| 9.3453 | 117050 | 0.0005 | - |
| 9.3493 | 117100 | 0.0007 | - |
| 9.3533 | 117150 | 0.0002 | - |
| 9.3573 | 117200 | 0.0001 | - |
| 9.3613 | 117250 | 0.0244 | - |
| 9.3653 | 117300 | 0.0002 | - |
| 9.3693 | 117350 | 0.0188 | - |
| 9.3733 | 117400 | 0.0001 | - |
| 9.3772 | 117450 | 0.0003 | - |
| 9.3812 | 117500 | 0.001 | - |
| 9.3852 | 117550 | 0.0 | - |
| 9.3892 | 117600 | 0.0001 | - |
| 9.3932 | 117650 | 0.0001 | - |
| 9.3972 | 117700 | 0.0003 | - |
| 9.4012 | 117750 | 0.0029 | - |
| 9.4052 | 117800 | 0.0003 | - |
| 9.4092 | 117850 | 0.0026 | - |
| 9.4132 | 117900 | 0.0019 | - |
| 9.4172 | 117950 | 0.0002 | - |
| 9.4212 | 118000 | 0.0007 | - |
| 9.4251 | 118050 | 0.0 | - |
| 9.4291 | 118100 | 0.0019 | - |
| 9.4331 | 118150 | 0.004 | - |
| 9.4371 | 118200 | 0.001 | - |
| 9.4411 | 118250 | 0.0016 | - |
| 9.4451 | 118300 | 0.0028 | - |
| 9.4491 | 118350 | 0.0001 | - |
| 9.4531 | 118400 | 0.0 | - |
| 9.4571 | 118450 | 0.0105 | - |
| 9.4611 | 118500 | 0.0013 | - |
| 9.4651 | 118550 | 0.0 | - |
| 9.4691 | 118600 | 0.0221 | - |
| 9.4731 | 118650 | 0.0001 | - |
| 9.4770 | 118700 | 0.0008 | - |
| 9.4810 | 118750 | 0.0001 | - |
| 9.4850 | 118800 | 0.0214 | - |
| 9.4890 | 118850 | 0.0001 | - |
| 9.4930 | 118900 | 0.0018 | - |
| 9.4970 | 118950 | 0.0011 | - |
| 9.5010 | 119000 | 0.0001 | - |
| 9.5050 | 119050 | 0.0009 | - |
| 9.5090 | 119100 | 0.0004 | - |
| 9.5130 | 119150 | 0.0004 | - |
| 9.5170 | 119200 | 0.0034 | - |
| 9.5210 | 119250 | 0.0016 | - |
| 9.5250 | 119300 | 0.0006 | - |
| 9.5289 | 119350 | 0.0 | - |
| 9.5329 | 119400 | 0.0001 | - |
| 9.5369 | 119450 | 0.0041 | - |
| 9.5409 | 119500 | 0.0029 | - |
| 9.5449 | 119550 | 0.0001 | - |
| 9.5489 | 119600 | 0.0189 | - |
| 9.5529 | 119650 | 0.0001 | - |
| 9.5569 | 119700 | 0.0 | - |
| 9.5609 | 119750 | 0.0 | - |
| 9.5649 | 119800 | 0.0042 | - |
| 9.5689 | 119850 | 0.0009 | - |
| 9.5729 | 119900 | 0.0 | - |
| 9.5768 | 119950 | 0.0 | - |
| 9.5808 | 120000 | 0.0 | - |
| 9.5848 | 120050 | 0.0007 | - |
| 9.5888 | 120100 | 0.0009 | - |
| 9.5928 | 120150 | 0.0006 | - |
| 9.5968 | 120200 | 0.0001 | - |
| 9.6008 | 120250 | 0.0001 | - |
| 9.6048 | 120300 | 0.0007 | - |
| 9.6088 | 120350 | 0.0001 | - |
| 9.6128 | 120400 | 0.0025 | - |
| 9.6168 | 120450 | 0.0136 | - |
| 9.6208 | 120500 | 0.0011 | - |
| 9.6248 | 120550 | 0.002 | - |
| 9.6287 | 120600 | 0.001 | - |
| 9.6327 | 120650 | 0.0008 | - |
| 9.6367 | 120700 | 0.0298 | - |
| 9.6407 | 120750 | 0.009 | - |
| 9.6447 | 120800 | 0.0042 | - |
| 9.6487 | 120850 | 0.0011 | - |
| 9.6527 | 120900 | 0.0089 | - |
| 9.6567 | 120950 | 0.0054 | - |
| 9.6607 | 121000 | 0.0019 | - |
| 9.6647 | 121050 | 0.0006 | - |
| 9.6687 | 121100 | 0.0 | - |
| 9.6727 | 121150 | 0.0 | - |
| 9.6766 | 121200 | 0.0001 | - |
| 9.6806 | 121250 | 0.0001 | - |
| 9.6846 | 121300 | 0.0 | - |
| 9.6886 | 121350 | 0.0128 | - |
| 9.6926 | 121400 | 0.0 | - |
| 9.6966 | 121450 | 0.0001 | - |
| 9.7006 | 121500 | 0.0 | - |
| 9.7046 | 121550 | 0.0007 | - |
| 9.7086 | 121600 | 0.0001 | - |
| 9.7126 | 121650 | 0.0001 | - |
| 9.7166 | 121700 | 0.0021 | - |
| 9.7206 | 121750 | 0.0001 | - |
| 9.7246 | 121800 | 0.0207 | - |
| 9.7285 | 121850 | 0.0001 | - |
| 9.7325 | 121900 | 0.0032 | - |
| 9.7365 | 121950 | 0.0008 | - |
| 9.7405 | 122000 | 0.0038 | - |
| 9.7445 | 122050 | 0.0005 | - |
| 9.7485 | 122100 | 0.0002 | - |
| 9.7525 | 122150 | 0.0005 | - |
| 9.7565 | 122200 | 0.0043 | - |
| 9.7605 | 122250 | 0.0003 | - |
| 9.7645 | 122300 | 0.021 | - |
| 9.7685 | 122350 | 0.0128 | - |
| 9.7725 | 122400 | 0.0032 | - |
| 9.7764 | 122450 | 0.0001 | - |
| 9.7804 | 122500 | 0.0 | - |
| 9.7844 | 122550 | 0.0119 | - |
| 9.7884 | 122600 | 0.0 | - |
| 9.7924 | 122650 | 0.0 | - |
| 9.7964 | 122700 | 0.0 | - |
| 9.8004 | 122750 | 0.0092 | - |
| 9.8044 | 122800 | 0.0001 | - |
| 9.8084 | 122850 | 0.0008 | - |
| 9.8124 | 122900 | 0.0009 | - |
| 9.8164 | 122950 | 0.0021 | - |
| 9.8204 | 123000 | 0.0 | - |
| 9.8244 | 123050 | 0.0174 | - |
| 9.8283 | 123100 | 0.0001 | - |
| 9.8323 | 123150 | 0.0095 | - |
| 9.8363 | 123200 | 0.0183 | - |
| 9.8403 | 123250 | 0.0001 | - |
| 9.8443 | 123300 | 0.0002 | - |
| 9.8483 | 123350 | 0.0 | - |
| 9.8523 | 123400 | 0.0004 | - |
| 9.8563 | 123450 | 0.0 | - |
| 9.8603 | 123500 | 0.0001 | - |
| 9.8643 | 123550 | 0.0028 | - |
| 9.8683 | 123600 | 0.0 | - |
| 9.8723 | 123650 | 0.0001 | - |
| 9.8762 | 123700 | 0.0 | - |
| 9.8802 | 123750 | 0.0004 | - |
| 9.8842 | 123800 | 0.0035 | - |
| 9.8882 | 123850 | 0.0001 | - |
| 9.8922 | 123900 | 0.0 | - |
| 9.8962 | 123950 | 0.0001 | - |
| 9.9002 | 124000 | 0.0038 | - |
| 9.9042 | 124050 | 0.0028 | - |
| 9.9082 | 124100 | 0.0002 | - |
| 9.9122 | 124150 | 0.0001 | - |
| 9.9162 | 124200 | 0.0 | - |
| 9.9202 | 124250 | 0.0005 | - |
| 9.9242 | 124300 | 0.016 | - |
| 9.9281 | 124350 | 0.0001 | - |
| 9.9321 | 124400 | 0.0001 | - |
| 9.9361 | 124450 | 0.0 | - |
| 9.9401 | 124500 | 0.0009 | - |
| 9.9441 | 124550 | 0.0 | - |
| 9.9481 | 124600 | 0.0015 | - |
| 9.9521 | 124650 | 0.0 | - |
| 9.9561 | 124700 | 0.0 | - |
| 9.9601 | 124750 | 0.0002 | - |
| 9.9641 | 124800 | 0.0 | - |
| 9.9681 | 124850 | 0.0028 | - |
| 9.9721 | 124900 | 0.0004 | - |
| 9.9760 | 124950 | 0.014 | - |
| 9.9800 | 125000 | 0.0138 | - |
| 9.9840 | 125050 | 0.0008 | - |
| 9.9880 | 125100 | 0.0001 | - |
| 9.9920 | 125150 | 0.0 | - |
| 9.9960 | 125200 | 0.0136 | - |
| 10.0 | 125250 | 0.0182 | 0.0777 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.14
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- Transformers: 4.39.0
- PyTorch: 2.4.0
- Datasets: 2.21.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
jeanviet/lora-jeanviet-flux1
|
jeanviet
| 2024-08-30T22:03:54Z | 5 | 1 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-30T21:57:53Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# lora jeanviet flux1
<Gallery />
## Model description
Flux LoRA by Jeanviet
## Trigger words
You should use `` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/jeanviet/lora-jeanviet-flux1/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-general-training](https://fal.ai/models/fal-ai/flux-lora-general-training).
|
nagthgr8/subject-gpt2
|
nagthgr8
| 2024-08-30T21:54:51Z | 116 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T21:54:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
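Until the authors add an official snippet, a minimal sketch using the standard 🤗 Transformers text-generation pipeline (assuming this checkpoint keeps the stock GPT-2 tokenizer and configuration, as the repository tags suggest) might look like:
```python
from transformers import pipeline

# Hypothetical quick start: load the checkpoint with the generic text-generation pipeline.
generator = pipeline("text-generation", model="nagthgr8/subject-gpt2")

# Generate a short continuation; the prompt and sampling settings are illustrative only.
result = generator("Subject:", max_new_tokens=30, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```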
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jimmycarter/flux-training-seuss-lora-bsz-4
|
jimmycarter
| 2024-08-30T21:45:45Z | 5 | 0 |
diffusers
|
[
"diffusers",
"flux",
"flux-diffusers",
"text-to-image",
"simpletuner",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-30T21:20:00Z |
---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- lora
- template:sd-lora
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'Dr. Seuss during a book signing event, seated at a table with an open book and pen in hand, his characteristic white beard, clear-rimmed glasses, and whimsical bow tie complementing his calm, attentive expression, all within the literary setting of a bookstore, reflecting his enduring connection with readers and the joy his work brought to many.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
- text: 'Anime picture of famed author Dr. Seuss in a Studio Ghibli style'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_2_0.png
- text: 'Dr. Seuss in a leather jacket riding a Harley Davidson Motorcycle'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_3_0.png
- text: 'Famous author Dr. Seuss holding a chainsaw while riding around on a unicycle, vintage TV still from the Dick Van Dyke show'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_4_0.png
- text: 'A photograph of Dr. Seuss riding in a horse-drawn carriage'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_5_0.png
---
# flux-training-seuss-lora-bsz-4
This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
The main validation prompt used during training was:
```
A photograph of Dr. Seuss riding in a horse-drawn carriage
```
## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `15`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 0
- Training steps: 200
- Learning rate: 0.0008
- Effective batch size: 4
- Micro-batch size: 4
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: bf16
- Quantised: No
- Xformers: Not used
- LoRA Rank: 16
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### default_dataset_arb
- Repeats: 100
- Total number of images: 4
- Total number of aspect buckets: 3
- Resolution: 1.5 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
### default_dataset
- Repeats: 100
- Total number of images: 3
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### default_dataset_512
- Repeats: 100
- Total number of images: 4
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### default_dataset_576
- Repeats: 100
- Total number of images: 4
- Total number of aspect buckets: 1
- Resolution: 0.331776 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### default_dataset_640
- Repeats: 100
- Total number of images: 4
- Total number of aspect buckets: 1
- Resolution: 0.4096 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### default_dataset_704
- Repeats: 100
- Total number of images: 4
- Total number of aspect buckets: 1
- Resolution: 0.495616 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### default_dataset_768
- Repeats: 100
- Total number of images: 3
- Total number of aspect buckets: 1
- Resolution: 0.589824 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### default_dataset_832
- Repeats: 100
- Total number of images: 3
- Total number of aspect buckets: 1
- Resolution: 0.692224 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### default_dataset_896
- Repeats: 100
- Total number of images: 3
- Total number of aspect buckets: 1
- Resolution: 0.802816 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### default_dataset_960
- Repeats: 100
- Total number of images: 3
- Total number of aspect buckets: 1
- Resolution: 0.9216 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'jimmycarter/flux-training-seuss-lora-bsz-4'
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
prompt = "A photograph of Dr. Seuss riding in a horse-drawn carriage"
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
prompt=prompt,
num_inference_steps=15,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
width=1024,
height=1024,
guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
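FLUX.1-dev is a large model, so the full-precision load above may not fit on smaller GPUs. As an optional sketch (not part of the original card), loading in bfloat16 and enabling model CPU offload via `accelerate` usually reduces peak VRAM at some speed cost:
```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'jimmycarter/flux-training-seuss-lora-bsz-4'

# Load weights in bfloat16 and let diffusers keep only the active submodule on the GPU.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)
pipeline.enable_model_cpu_offload()

image = pipeline(
    "A photograph of Dr. Seuss riding in a horse-drawn carriage",
    num_inference_steps=15,
    guidance_scale=3.5,
    width=1024,
    height=1024,
).images[0]
image.save("output-offload.png", format="PNG")
```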
|
Sharan1712/llama2_7B_unnaturalcore_qdora_loftq_4bit_6c
|
Sharan1712
| 2024-08-30T21:42:44Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-08-30T21:40:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
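Until the authors add an official snippet, a minimal sketch is shown below; it assumes the checkpoint is meant to be loaded in 4-bit with bitsandbytes, as the repository tags suggest, and the prompt is purely illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Sharan1712/llama2_7B_unnaturalcore_qdora_loftq_4bit_6c"

# 4-bit NF4 quantization, mirroring the "4-bit" / "bitsandbytes" tags on the repository.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Write one sentence about large language models.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```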
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rodrigo1771/BioLinkBERT-base-drugtemist-en-ner
|
Rodrigo1771
| 2024-08-30T21:42:27Z | 6 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:Rodrigo1771/drugtemist-en-ner",
"base_model:michiyasunaga/BioLinkBERT-base",
"base_model:finetune:michiyasunaga/BioLinkBERT-base",
"license:apache-2.0",
"model-index",
"region:us"
] |
token-classification
| 2024-08-30T21:24:16Z |
---
license: apache-2.0
base_model: michiyasunaga/BioLinkBERT-base
tags:
- token-classification
- generated_from_trainer
datasets:
- Rodrigo1771/drugtemist-en-ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: output
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: Rodrigo1771/drugtemist-en-ner
type: Rodrigo1771/drugtemist-en-ner
config: DrugTEMIST English NER
split: validation
args: DrugTEMIST English NER
metrics:
- name: Precision
type: precision
value: 0.9327102803738317
- name: Recall
type: recall
value: 0.9301025163094129
- name: F1
type: f1
value: 0.9314045730284647
- name: Accuracy
type: accuracy
value: 0.9986953367008066
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the Rodrigo1771/drugtemist-en-ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0056
- Precision: 0.9327
- Recall: 0.9301
- F1: 0.9314
- Accuracy: 0.9987
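No usage snippet is included yet; a minimal sketch with the 🤗 Transformers token-classification pipeline, assuming the published repository ID, could look like:
```python
from transformers import pipeline

# Aggregate sub-word predictions into whole entity spans (drug mentions).
ner = pipeline(
    "token-classification",
    model="Rodrigo1771/BioLinkBERT-base-drugtemist-en-ner",
    aggregation_strategy="simple",
)

print(ner("The patient was started on metformin and low-dose aspirin."))
```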
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 434 | 0.0057 | 0.8938 | 0.8938 | 0.8938 | 0.9981 |
| 0.0182 | 2.0 | 868 | 0.0044 | 0.9024 | 0.9301 | 0.9160 | 0.9985 |
| 0.0039 | 3.0 | 1302 | 0.0045 | 0.9129 | 0.9282 | 0.9205 | 0.9987 |
| 0.0024 | 4.0 | 1736 | 0.0051 | 0.8821 | 0.9348 | 0.9077 | 0.9983 |
| 0.0017 | 5.0 | 2170 | 0.0057 | 0.9251 | 0.9320 | 0.9285 | 0.9986 |
| 0.0012 | 6.0 | 2604 | 0.0061 | 0.9001 | 0.9236 | 0.9117 | 0.9984 |
| 0.0009 | 7.0 | 3038 | 0.0056 | 0.9327 | 0.9301 | 0.9314 | 0.9987 |
| 0.0009 | 8.0 | 3472 | 0.0068 | 0.9118 | 0.9348 | 0.9231 | 0.9986 |
| 0.0006 | 9.0 | 3906 | 0.0072 | 0.9267 | 0.9310 | 0.9289 | 0.9987 |
| 0.0004 | 10.0 | 4340 | 0.0073 | 0.9192 | 0.9329 | 0.9260 | 0.9986 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/yaoi-v1-instruct-GGUF
|
mradermacher
| 2024-08-30T21:41:09Z | 18 | 1 |
transformers
|
[
"transformers",
"gguf",
"code",
"yaoi",
"en",
"base_model:Ichate/yaoi-v1-instruct",
"base_model:quantized:Ichate/yaoi-v1-instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-08-30T12:31:07Z |
---
base_model: Ichate/yaoi-v1-instruct
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- code
- yaoi
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Ichate/yaoi-v1-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/yaoi-v1-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
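As one concrete (unofficial) option, the files can also be loaded from Python with the `llama-cpp-python` bindings; a rough sketch, assuming that package is installed and using the Q4_K_M file listed below:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one of the quant files from this repository and load it with llama.cpp.
path = hf_hub_download(
    repo_id="mradermacher/yaoi-v1-instruct-GGUF",
    filename="yaoi-v1-instruct.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)

out = llm("Write a one-sentence scene description.", max_tokens=64)
print(out["choices"][0]["text"])
```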
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jeanviet/flux-lora
|
jeanviet
| 2024-08-30T21:35:51Z | 6 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-30T21:35:45Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Portrait Photo
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# flux lora
<Gallery />
## Model description
Flux LoRA by Jeanviet
## Trigger words
You should use `Portrait Photo` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/jeanviet/flux-lora/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-general-training](https://fal.ai/models/fal-ai/flux-lora-general-training).
|
RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf
|
RichardErkhov
| 2024-08-30T21:20:17Z | 74 | 0 | null |
[
"gguf",
"arxiv:2401.06466",
"endpoints_compatible",
"region:us"
] | null | 2024-08-30T19:08:18Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
PersianMind-v1.0 - GGUF
- Model creator: https://huggingface.co/universitytehran/
- Original model: https://huggingface.co/universitytehran/PersianMind-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [PersianMind-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q2_K.gguf) | Q2_K | 2.4GB |
| [PersianMind-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.IQ3_XS.gguf) | IQ3_XS | 2.65GB |
| [PersianMind-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.IQ3_S.gguf) | IQ3_S | 2.79GB |
| [PersianMind-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q3_K_S.gguf) | Q3_K_S | 2.79GB |
| [PersianMind-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.IQ3_M.gguf) | IQ3_M | 2.95GB |
| [PersianMind-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q3_K.gguf) | Q3_K | 3.12GB |
| [PersianMind-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q3_K_M.gguf) | Q3_K_M | 3.12GB |
| [PersianMind-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q3_K_L.gguf) | Q3_K_L | 3.4GB |
| [PersianMind-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.IQ4_XS.gguf) | IQ4_XS | 3.45GB |
| [PersianMind-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q4_0.gguf) | Q4_0 | 3.61GB |
| [PersianMind-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.IQ4_NL.gguf) | IQ4_NL | 3.63GB |
| [PersianMind-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q4_K_S.gguf) | Q4_K_S | 3.64GB |
| [PersianMind-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q4_K.gguf) | Q4_K | 3.85GB |
| [PersianMind-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q4_K_M.gguf) | Q4_K_M | 3.85GB |
| [PersianMind-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q4_1.gguf) | Q4_1 | 4.0GB |
| [PersianMind-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q5_0.gguf) | Q5_0 | 4.39GB |
| [PersianMind-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q5_K_S.gguf) | Q5_K_S | 4.39GB |
| [PersianMind-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q5_K.gguf) | Q5_K | 4.51GB |
| [PersianMind-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q5_K_M.gguf) | Q5_K_M | 4.51GB |
| [PersianMind-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q5_1.gguf) | Q5_1 | 4.77GB |
| [PersianMind-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q6_K.gguf) | Q6_K | 5.21GB |
| [PersianMind-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/universitytehran_-_PersianMind-v1.0-gguf/blob/main/PersianMind-v1.0.Q8_0.gguf) | Q8_0 | 6.75GB |
Original model description:
---
license: cc-by-nc-sa-4.0
language:
- multilingual
- fa
- en
library_name: transformers
tags:
- text-generation-inference
inference: false
metrics:
- bleu
- comet
- accuracy
- perplexity
- spearmanr
pipeline_tag: text-generation
co2_eq_emissions:
emissions: 232380
---
<p align="center">
<img src="PersianMind.jpg" alt="PersianMind logo" width=200/>
</p>
# <span style="font-variant:small-caps;">PersianMind</span>
<span style="font-variant:small-caps;">PersianMind</span> is a cross-lingual Persian-English large language model.
The model achieves state-of-the-art results on the Persian subset of the [<span style="font-variant:small-caps;">Belebele</span>](https://github.com/facebookresearch/belebele) benchmark
and the [ParsiNLU multiple-choice QA](https://github.com/persiannlp/parsinlu) task.
It also attains performance comparable to GPT-3.5-turbo in a Persian reading comprehension task.
## Model Description
- **Developed by:** [Pedram Rostami](mailto:pedram.rostami@ut.ac.ir), [Ali Salemi](mailto:alisalemi@ut.ac.ir), and [Mohammad Javad Dousti](mailto:mjdousti@ut.ac.ir)
- **Model type:** Language model
- **Languages:** English and Persian
- **License:** [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) (non-commercial use only.)
## How to Get Started with the Model
Use the code below to get started with the model.
Note that you need to install the <code><b>sentencepiece</b></code> and <code><b>accelerate</b></code> libraries along with <code><b>PyTorch</b></code> and <code><b>🤗Transformers</b></code> to run this code.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"universitytehran/PersianMind-v1.0",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
device_map={"": device},
)
tokenizer = AutoTokenizer.from_pretrained(
"universitytehran/PersianMind-v1.0",
)
TEMPLATE = "{context}\nYou: {prompt}\nPersianMind: "
CONTEXT = "This is a conversation with PersianMind. It is an artificial intelligence model designed by a team of " \
"NLP experts at the University of Tehran to help you with various tasks such as answering questions, " \
"providing recommendations, and helping with decision making. You can ask it anything you want and " \
"it will do its best to give you accurate and relevant information."
PROMPT = "در مورد هوش مصنوعی توضیح بده."
model_input = TEMPLATE.format(context=CONTEXT, prompt=PROMPT)
input_tokens = tokenizer(model_input, return_tensors="pt")
input_tokens = input_tokens.to(device)
generate_ids = model.generate(**input_tokens, max_new_tokens=512, do_sample=False, repetition_penalty=1.1)
model_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(model_output[len(model_input):])
```
### How to Quantize the Model
Quantized models can be run on resource-constrained devices.
To quantize the model, you should install the <code><b>bitsandbytes</b></code> library.
In order to quantize the model in 8-bit (`INT8`), use the code below.
```python
model = AutoModelForCausalLM.from_pretrained(
"universitytehran/PersianMind-v1.0",
device_map="auto",
low_cpu_mem_usage=True,
load_in_8bit=True
)
```
Alternatively, you can quantize the model in 4-bit (`NormalFloat4`) with the following code.
```python
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(
"universitytehran/PersianMind-v1.0",
quantization_config=quantization_config,
device_map="auto"
)
```
### Evaluating Quantized Models
| Model | <span style="font-variant:small-caps;">Belebele</span> (Persian) | Fa→En Translation<br>(<span style="font-variant:small-caps;">Comet</span>) | En→Fa Translation<br>(<span style="font-variant:small-caps;">Comet</span>) | Model Size | Tokens/sec |
| :----------------------------------------------------------------: | :--------------------------------------------------------------: | :------------------------------------------------------------------------: | :------------------------------------------------------------------------: | :--------: | :--------: |
| <span style="font-variant:small-caps;">PersianMind</span> (`BF16`) | 73.9 | 83.61 | 79.44 | 13.7G | 25.35 |
| <span style="font-variant:small-caps;">PersianMind</span> (`INT8`) | 73.7 | 82.32 | 78.61 | 7.2G | 11.36 |
| <span style="font-variant:small-caps;">PersianMind</span> (`NF4`) | 70.2 | 82.07 | 80.36 | 3.9G | 24.36 |
We evaluated quantized models in various tasks against the original model.
Specifically, we evaluated all models using the reading comprehension multiple-choice
question-answering benchmark of [<span style="font-variant:small-caps;">Belebele</span>](https://github.com/facebookresearch/belebele) (Persian subset) and reported the accuracy of each model.
Additionally, we evaluated our models for Persian-to-English and English-to-Persian translation tasks.
For this, we utilized the Persian-English subset of the [<span style="font-variant:small-caps;">Flores</span>-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset and
reported our results using the <span style="font-variant:small-caps;">Comet</span> metric.
Furthermore, we calculated the average number of tokens generated per second by each model while running the translation tasks.
To understand resource efficiency, we measured the memory usage of each model by employing the `get_memory_footprint()` function.
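For illustration, the model-size column can be reproduced along these lines (a sketch, not the authors' exact script):
```python
# After loading any of the BF16 / INT8 / NF4 variants shown above into `model`:
footprint_gb = model.get_memory_footprint() / 1024**3
print(f"Model size: {footprint_gb:.1f} GiB")
```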
## License
<span style="font-variant:small-caps;">PersianMind</span> is subject to Meta's [LLaMa2 Community License](https://raw.githubusercontent.com/facebookresearch/llama/main/LICENSE).
It is further licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/), which allows non-commercial use of the model.
Commercial use of this model requires a written agreement, which must be obtained from the copyright holders listed as developers on this page.
If you suspect any violations, please reach out to us.
## Citation
If you find this model helpful, please ensure to cite the following paper.
**BibTeX:**
```bibtex
@misc{persianmind,
title={{PersianMind: A Cross-Lingual Persian-English Large Language Model}},
author={Rostami, Pedram and Salemi, Ali and Dousti, Mohammad Javad},
year={2024},
eprint={2401.06466},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
clydeiii/scott-stapp-flux
|
clydeiii
| 2024-08-30T21:18:57Z | 81 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"flux",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-23T18:41:24Z |
---
tags:
- text-to-image
- lora
- diffusers
- flux
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: p3r5on
widget:
- text: p3r5on
license: other
---
### Scott Stapp
#### Flux LoRA
---
# Prompt: p3r5on
#### Sample pictures:




|
mradermacher/MN-12B-Lyra-v3-GGUF
|
mradermacher
| 2024-08-30T21:14:13Z | 10 | 3 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Sao10K/MN-12B-Lyra-v3",
"base_model:quantized:Sao10K/MN-12B-Lyra-v3",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-29T07:52:37Z |
---
base_model: Sao10K/MN-12B-Lyra-v3
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Sao10K/MN-12B-Lyra-v3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MN-12B-Lyra-v3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.IQ3_XS.gguf) | IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.IQ3_M.gguf) | IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Lyra-v3-GGUF/resolve/main/MN-12B-Lyra-v3.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Xilixmeaty40/vfdfdfdfdfd
|
Xilixmeaty40
| 2024-08-30T21:10:18Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"base_model:vinthony/SadTalker",
"base_model:finetune:vinthony/SadTalker",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T20:26:42Z |
---
license: apache-2.0
base_model: vinthony/SadTalker
library_name: transformers
---
|
LouisSanna/orpo-model-output
|
LouisSanna
| 2024-08-30T20:57:31Z | 119 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"trl",
"orpo",
"generated_from_trainer",
"dataset:piqa",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T20:43:47Z |
---
library_name: transformers
license: mit
base_model: openai-community/gpt2
tags:
- trl
- orpo
- generated_from_trainer
datasets:
- piqa
model-index:
- name: orpo-model-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# orpo-model-output
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on the piqa dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2999
- Rewards/chosen: -0.3182
- Rewards/rejected: -0.3369
- Rewards/accuracies: 0.6693
- Rewards/margins: 0.0187
- Logps/rejected: -3.3688
- Logps/chosen: -3.1821
- Logits/rejected: -24.5739
- Logits/chosen: -24.6603
- Nll Loss: 3.2330
- Log Odds Ratio: -0.6695
- Log Odds Chosen: 0.2013
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 3.1799 | 1.0 | 3223 | 3.1575 | -0.2992 | -0.3103 | 0.6472 | 0.0111 | -3.1030 | -2.9917 | -16.2781 | -16.3180 | 3.0900 | -0.6758 | 0.1201 |
| 2.7077 | 2.0 | 6446 | 3.1544 | -0.3005 | -0.3160 | 0.6652 | 0.0154 | -3.1595 | -3.0051 | -21.7517 | -21.8387 | 3.0878 | -0.6671 | 0.1676 |
| 2.2691 | 3.0 | 9669 | 3.2999 | -0.3182 | -0.3369 | 0.6693 | 0.0187 | -3.3688 | -3.1821 | -24.5739 | -24.6603 | 3.2330 | -0.6695 | 0.2013 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.19.1
|
ambrosfitz/tinyllama-history
|
ambrosfitz
| 2024-08-30T20:53:18Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:ambrosfitz/just_history_v2",
"dataset:ambrosfitz/synth_history_sentences",
"dataset:ambrosfitz/ps_history_txt",
"dataset:ambrosfitz/might-history-merge_v2",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T00:43:28Z |
---
library_name: transformers
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: tinyllama-history
results: []
datasets:
- ambrosfitz/just_history_v2
- ambrosfitz/synth_history_sentences
- ambrosfitz/ps_history_txt
- ambrosfitz/might-history-merge_v2
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-history
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the history datasets listed in this card's metadata.
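No usage example is provided; a minimal sketch, assuming the fine-tune keeps the TinyLlama-1.1B-Chat chat template, might look like:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ambrosfitz/tinyllama-history"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build a chat-formatted prompt (assumes the base model's chat template was retained).
messages = [{"role": "user", "content": "Summarize the causes of the War of 1812 in two sentences."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```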
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
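## Usage
No inference example is given above; a minimal sketch, assuming the base model's chat template was carried over by the tokenizer:
```python
from transformers import pipeline

# Minimal sketch: chat-style generation with the fine-tuned checkpoint.
# Assumes the TinyLlama chat template was kept when the tokenizer was pushed.
pipe = pipeline("text-generation", model="ambrosfitz/tinyllama-history")

messages = [{"role": "user", "content": "Summarize the main causes of the War of 1812."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"])  # full conversation, including the model's reply
```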
|
stablediffusionapi/oxalis-anime-hentai-model
|
stablediffusionapi
| 2024-08-30T20:45:20Z | 32 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-08-30T20:40:06Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Oxalis Anime Hentai Model API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and set **model_id** to "oxalis-anime-hentai-model".
Coding in PHP, Node, Java, etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/oxalis-anime-hentai-model)
Model link: [View model](https://modelslab.com/models/oxalis-anime-hentai-model)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "oxalis-anime-hentai-model",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
stablediffusionapi/falkons-anime-and-hentai
|
stablediffusionapi
| 2024-08-30T20:45:20Z | 29 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-08-30T20:40:53Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Falkons (Anime and Hentai) API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and set **model_id** to "falkons-anime-and-hentai".
Coding in PHP, Node, Java, etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/falkons-anime-and-hentai)
Model link: [View model](https://modelslab.com/models/falkons-anime-and-hentai)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "falkons-anime-and-hentai",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
bartowski/c4ai-command-r-plus-08-2024-GGUF
|
bartowski
| 2024-08-30T20:45:08Z | 5,335 | 19 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"base_model:CohereForAI/c4ai-command-r-plus-08-2024",
"base_model:quantized:CohereForAI/c4ai-command-r-plus-08-2024",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2024-08-30T14:28:25Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
library_name: transformers
extra_gated_prompt: "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy)."
extra_gated_fields:
Name: text
Affiliation: text
Country:
type: select
options:
- Aruba
- Afghanistan
- Angola
- Anguilla
- Åland Islands
- Albania
- Andorra
- United Arab Emirates
- Argentina
- Armenia
- American Samoa
- Antarctica
- French Southern Territories
- Antigua and Barbuda
- Australia
- Austria
- Azerbaijan
- Burundi
- Belgium
- Benin
- Bonaire Sint Eustatius and Saba
- Burkina Faso
- Bangladesh
- Bulgaria
- Bahrain
- Bahamas
- Bosnia and Herzegovina
- Saint Barthélemy
- Belarus
- Belize
- Bermuda
- Plurinational State of Bolivia
- Brazil
- Barbados
- Brunei-Darussalam
- Bhutan
- Bouvet-Island
- Botswana
- Central African Republic
- Canada
- Cocos (Keeling) Islands
- Switzerland
- Chile
- China
- Côte-dIvoire
- Cameroon
- Democratic Republic of the Congo
- Cook Islands
- Colombia
- Comoros
- Cabo Verde
- Costa Rica
- Cuba
- Curaçao
- Christmas Island
- Cayman Islands
- Cyprus
- Czechia
- Germany
- Djibouti
- Dominica
- Denmark
- Dominican Republic
- Algeria
- Ecuador
- Egypt
- Eritrea
- Western Sahara
- Spain
- Estonia
- Ethiopia
- Finland
- Fiji
- Falkland Islands (Malvinas)
- France
- Faroe Islands
- Federated States of Micronesia
- Gabon
- United Kingdom
- Georgia
- Guernsey
- Ghana
- Gibraltar
- Guinea
- Guadeloupe
- Gambia
- Guinea Bissau
- Equatorial Guinea
- Greece
- Grenada
- Greenland
- Guatemala
- French Guiana
- Guam
- Guyana
- Hong Kong
- Heard Island and McDonald Islands
- Honduras
- Croatia
- Haiti
- Hungary
- Indonesia
- Isle of Man
- India
- British Indian Ocean Territory
- Ireland
- Islamic Republic of Iran
- Iraq
- Iceland
- Israel
- Italy
- Jamaica
- Jersey
- Jordan
- Japan
- Kazakhstan
- Kenya
- Kyrgyzstan
- Cambodia
- Kiribati
- Saint-Kitts-and-Nevis
- South Korea
- Kuwait
- Lao-Peoples-Democratic-Republic
- Lebanon
- Liberia
- Libya
- Saint-Lucia
- Liechtenstein
- Sri Lanka
- Lesotho
- Lithuania
- Luxembourg
- Latvia
- Macao
- Saint Martin (French-part)
- Morocco
- Monaco
- Republic of Moldova
- Madagascar
- Maldives
- Mexico
- Marshall Islands
- North Macedonia
- Mali
- Malta
- Myanmar
- Montenegro
- Mongolia
- Northern Mariana Islands
- Mozambique
- Mauritania
- Montserrat
- Martinique
- Mauritius
- Malawi
- Malaysia
- Mayotte
- Namibia
- New Caledonia
- Niger
- Norfolk Island
- Nigeria
- Nicaragua
- Niue
- Netherlands
- Norway
- Nepal
- Nauru
- New Zealand
- Oman
- Pakistan
- Panama
- Pitcairn
- Peru
- Philippines
- Palau
- Papua New Guinea
- Poland
- Puerto Rico
- North Korea
- Portugal
- Paraguay
- State of Palestine
- French Polynesia
- Qatar
- Réunion
- Romania
- Russia
- Rwanda
- Saudi Arabia
- Sudan
- Senegal
- Singapore
- South Georgia and the South Sandwich Islands
- Saint Helena Ascension and Tristan da Cunha
- Svalbard and Jan Mayen
- Solomon Islands
- Sierra Leone
- El Salvador
- San Marino
- Somalia
- Saint Pierre and Miquelon
- Serbia
- South Sudan
- Sao Tome and Principe
- Suriname
- Slovakia
- Slovenia
- Sweden
- Eswatini
- Sint Maarten (Dutch-part)
- Seychelles
- Syrian Arab Republic
- Turks and Caicos Islands
- Chad
- Togo
- Thailand
- Tajikistan
- Tokelau
- Turkmenistan
- Timor Leste
- Tonga
- Trinidad and Tobago
- Tunisia
- Turkey
- Tuvalu
- Taiwan
- United Republic of Tanzania
- Uganda
- Ukraine
- United States Minor Outlying Islands
- Uruguay
- United-States
- Uzbekistan
- Holy See (Vatican City State)
- Saint Vincent and the Grenadines
- Bolivarian Republic of Venezuela
- Virgin Islands British
- Virgin Islands U.S.
- VietNam
- Vanuatu
- Wallis and Futuna
- Samoa
- Yemen
- South Africa
- Zambia
- Zimbabwe
Receive email updates on C4AI and Cohere research, events, products and services?:
type: select
options:
- Yes
- No
I agree to use this model for non-commercial use ONLY: checkbox
quantized_by: bartowski
pipeline_tag: text-generation
base_model: CohereForAI/c4ai-command-r-plus-08-2024
---
## Llamacpp imatrix Quantizations of c4ai-command-r-plus-08-2024
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3634">b3634</a> for quantization.
Original model: https://huggingface.co/CohereForAI/c4ai-command-r-plus-08-2024
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{system_prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|><|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
```
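This card points at LM Studio, but the same template can also be exercised from Python. A minimal sketch using llama-cpp-python (not covered by this card, so treat the details as an assumption):
```python
from llama_cpp import Llama

# Sketch: load one of the GGUF files below and complete a single turn.
llm = Llama(
    model_path="c4ai-command-r-plus-08-2024-Q2_K.gguf",  # any quant from the table below
    n_ctx=8192,
    n_gpu_layers=-1,  # offload everything that fits on the GPU
)

prompt = (
    "<BOS_TOKEN><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>You are a helpful assistant."
    "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Write a haiku about quantization."
    "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
)
out = llm(prompt, max_tokens=128, stop=["<|END_OF_TURN_TOKEN|>"])
print(out["choices"][0]["text"])
```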
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [c4ai-command-r-plus-08-2024-Q8_0.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q8_0) | Q8_0 | 110.31GB | true | Extremely high quality, generally unneeded but max available quant. |
| [c4ai-command-r-plus-08-2024-Q6_K.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q6_K) | Q6_K | 85.17GB | true | Very high quality, near perfect, *recommended*. |
| [c4ai-command-r-plus-08-2024-Q5_K_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q5_K_M) | Q5_K_M | 73.62GB | true | High quality, *recommended*. |
| [c4ai-command-r-plus-08-2024-Q4_K_L.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q4_K_L) | Q4_K_L | 63.51GB | true | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [c4ai-command-r-plus-08-2024-Q4_K_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q4_K_M) | Q4_K_M | 62.75GB | true | Good quality, default size for most use cases, *recommended*. |
| [c4ai-command-r-plus-08-2024-Q4_K_S.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q4_K_S) | Q4_K_S | 59.64GB | true | Slightly lower quality with more space savings, *recommended*. |
| [c4ai-command-r-plus-08-2024-Q4_0.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q4_0) | Q4_0 | 59.43GB | true | Legacy format, generally not worth using over similarly sized formats. |
| [c4ai-command-r-plus-08-2024-Q4_0_4_8.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q4_0_4_8) | Q4_0_4_8 | 59.22GB | true | Optimized for ARM and CPU inference, much faster than Q4_0 at similar quality. |
| [c4ai-command-r-plus-08-2024-Q4_0_4_4.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q4_0_4_4) | Q4_0_4_4 | 59.22GB | true | Optimized for ARM and CPU inference, much faster than Q4_0 at similar quality. |
| [c4ai-command-r-plus-08-2024-IQ4_XS.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-IQ4_XS) | IQ4_XS | 56.20GB | true | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [c4ai-command-r-plus-08-2024-Q3_K_XL.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q3_K_XL) | Q3_K_XL | 56.16GB | true | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [c4ai-command-r-plus-08-2024-Q3_K_L.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q3_K_L) | Q3_K_L | 55.40GB | true | Lower quality but usable, good for low RAM availability. |
| [c4ai-command-r-plus-08-2024-Q3_K_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/tree/main/c4ai-command-r-plus-08-2024-Q3_K_M) | Q3_K_M | 50.98GB | true | Low quality. |
| [c4ai-command-r-plus-08-2024-IQ3_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/blob/main/c4ai-command-r-plus-08-2024-IQ3_M.gguf) | IQ3_M | 47.68GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [c4ai-command-r-plus-08-2024-Q3_K_S.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/blob/main/c4ai-command-r-plus-08-2024-Q3_K_S.gguf) | Q3_K_S | 45.85GB | false | Low quality, not recommended. |
| [c4ai-command-r-plus-08-2024-IQ3_XXS.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/blob/main/c4ai-command-r-plus-08-2024-IQ3_XXS.gguf) | IQ3_XXS | 40.66GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [c4ai-command-r-plus-08-2024-Q2_K_L.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/blob/main/c4ai-command-r-plus-08-2024-Q2_K_L.gguf) | Q2_K_L | 40.26GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [c4ai-command-r-plus-08-2024-Q2_K.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/blob/main/c4ai-command-r-plus-08-2024-Q2_K.gguf) | Q2_K | 39.50GB | false | Very low quality but surprisingly usable. |
| [c4ai-command-r-plus-08-2024-IQ2_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/blob/main/c4ai-command-r-plus-08-2024-IQ2_M.gguf) | IQ2_M | 36.04GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [c4ai-command-r-plus-08-2024-IQ2_XS.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/blob/main/c4ai-command-r-plus-08-2024-IQ2_XS.gguf) | IQ2_XS | 31.63GB | false | Low quality, uses SOTA techniques to be usable. |
| [c4ai-command-r-plus-08-2024-IQ2_XXS.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/blob/main/c4ai-command-r-plus-08-2024-IQ2_XXS.gguf) | IQ2_XXS | 28.61GB | false | Very low quality, uses SOTA techniques to be usable. |
| [c4ai-command-r-plus-08-2024-IQ1_M.gguf](https://huggingface.co/bartowski/c4ai-command-r-plus-08-2024-GGUF/blob/main/c4ai-command-r-plus-08-2024-IQ1_M.gguf) | IQ1_M | 25.22GB | false | Extremely low quality, *not* recommended. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/c4ai-command-r-plus-08-2024-GGUF --include "c4ai-command-r-plus-08-2024-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/c4ai-command-r-plus-08-2024-GGUF --include "c4ai-command-r-plus-08-2024-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (c4ai-command-r-plus-08-2024-Q8_0) or download them all in place (./)
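The same downloads can also be scripted from Python with `huggingface_hub` (a sketch; pick whichever quant you want from the table above):
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant (Split = false in the table above):
hf_hub_download(
    repo_id="bartowski/c4ai-command-r-plus-08-2024-GGUF",
    filename="c4ai-command-r-plus-08-2024-Q2_K.gguf",
    local_dir="./",
)

# Split quant: fetch the whole folder of parts instead.
snapshot_download(
    repo_id="bartowski/c4ai-command-r-plus-08-2024-GGUF",
    allow_patterns=["c4ai-command-r-plus-08-2024-Q8_0/*"],
    local_dir="./",
)
```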
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
stablediffusionapi/kawaii-realistic-anime-mi
|
stablediffusionapi
| 2024-08-30T20:33:23Z | 30 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-08-30T20:29:59Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Kawaii Realistic Anime Mix API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and set **model_id** to "kawaii-realistic-anime-mi".
Coding in PHP, Node, Java, etc.? Have a look at the docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/kawaii-realistic-anime-mi)
Model link: [View model](https://modelslab.com/models/kawaii-realistic-anime-mi)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "kawaii-realistic-anime-mi",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
taku-yoshioka/rlhf-line-marcja-0828
|
taku-yoshioka
| 2024-08-30T20:19:51Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-08-29T00:52:48Z |
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="taku-yoshioka/rlhf-line-marcja-0828")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("taku-yoshioka/rlhf-line-marcja-0828")
model = AutoModelForCausalLMWithValueHead.from_pretrained("taku-yoshioka/rlhf-line-marcja-0828")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
Xilixmeaty40/sad
|
Xilixmeaty40
| 2024-08-30T20:16:41Z | 76 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"code",
"chemistry",
"biology",
"music",
"es",
"dataset:fka/awesome-chatgpt-prompts",
"dataset:LLLM-Lab/huanhuan_sadtalker",
"arxiv:1910.09700",
"base_model:vinthony/SadTalker",
"base_model:finetune:vinthony/SadTalker",
"license:gemma",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T20:35:23Z |
---
license: gemma
datasets:
- fka/awesome-chatgpt-prompts
- LLLM-Lab/huanhuan_sadtalker
language:
- es
metrics:
- accuracy
base_model: vinthony/SadTalker
library_name: transformers
tags:
- code
- chemistry
- biology
- music
extra_gated_heading: Access sad on Hugging Face
extra_gated_button_content: Acknowledge license
extra_gated_prompt: >-
To access sad on Hugging Face, you’re required to review and agree to Google’s
usage license. To do this, please ensure you’re logged in to Hugging Face and
click below. Requests are processed immediately.
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
magnifi/Phi3_intent_v32_3_epoch_7_lr_0.002_r_16_a_16
|
magnifi
| 2024-08-30T20:02:16Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T19:59:50Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** magnifi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
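## Usage
The card gives no inference snippet; a minimal sketch, assuming the uploaded weights are a merged checkpoint and the base model's chat template was kept by the tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Sketch only: assumes a merged checkpoint that loads directly with transformers.
model_id = "magnifi/Phi3_intent_v32_3_epoch_7_lr_0.002_r_16_a_16"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Example query to parse for intent"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```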
|
lgk03/WITHINAPPS_NDD-dimeshift_test-tags-CWAdj
|
lgk03
| 2024-08-30T20:01:51Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-08-30T19:39:40Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: WITHINAPPS_NDD-dimeshift_test-tags-CWAdj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WITHINAPPS_NDD-dimeshift_test-tags-CWAdj
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unnamed dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3579
- Accuracy: 0.8865
- F1: 0.9038
- Precision: 0.9379
- Recall: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.9897 | 72 | 0.4258 | 0.8624 | 0.8858 | 0.9303 | 0.8624 |
| No log | 1.9931 | 145 | 0.3954 | 0.8723 | 0.8929 | 0.9321 | 0.8723 |
| No log | 2.9966 | 218 | 0.3774 | 0.5370 | 0.6318 | 0.9294 | 0.5370 |
| No log | 4.0 | 291 | 0.3647 | 0.8865 | 0.9038 | 0.9379 | 0.8865 |
| No log | 4.9485 | 360 | 0.3579 | 0.8865 | 0.9038 | 0.9379 | 0.8865 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
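## Usage
A minimal inference sketch (the label mapping is not documented above, so the pipeline may return generic `LABEL_0`/`LABEL_1` names):
```python
from transformers import pipeline

# Sketch: score an input with the fine-tuned DistilBERT classifier.
classifier = pipeline(
    "text-classification",
    model="lgk03/WITHINAPPS_NDD-dimeshift_test-tags-CWAdj",
)
print(classifier("example input text to classify"))
```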
|
unclemusclez/SmolLM-135M-Instruct-DEVINator-v0.2
|
unclemusclez
| 2024-08-30T20:00:21Z | 111 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gguf",
"llama",
"sentence-similarity",
"feature-extraction",
"autotrain",
"dataset:skratos115/opendevin_DataDevinator",
"base_model:HuggingFaceTB/SmolLM-135M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM-135M-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
sentence-similarity
| 2024-08-30T15:44:10Z |
---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- autotrain
base_model: HuggingFaceTB/SmolLM-135M-Instruct
widget:
- source_sentence: 'search_query: i love autotrain'
sentences:
- 'search_query: huggingface auto train'
- 'search_query: hugging face auto train'
- 'search_query: i love autotrain'
pipeline_tag: sentence-similarity
datasets:
- skratos115/opendevin_DataDevinator
---
AVAILABLE ON OLLAMA: https://ollama.com/unclemusclez/smollm-135m-instruct-devinator
# Model Trained Using AutoTrain
- Problem type: Sentence Transformers
## Validation Metrics
No validation metrics available
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("unclemusclez/SmolLM-135M-Instruct-DEVINator-v0.2")
# Run inference
sentences = [
'search_query: autotrain',
'search_query: auto train',
'search_query: i love autotrain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
```
|
unclemusclez/SmolLM-135M-Instruct-DEVINator-v0.1
|
unclemusclez
| 2024-08-30T20:00:02Z | 112 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gguf",
"llama",
"sentence-similarity",
"feature-extraction",
"autotrain",
"dataset:skratos115/opendevin_DataDevinator",
"base_model:HuggingFaceTB/SmolLM-135M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM-135M-Instruct",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
sentence-similarity
| 2024-08-30T19:10:03Z |
---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- autotrain
base_model: HuggingFaceTB/SmolLM-135M-Instruct
widget:
- source_sentence: 'search_query: i love autotrain'
sentences:
- 'search_query: huggingface auto train'
- 'search_query: hugging face auto train'
- 'search_query: i love autotrain'
pipeline_tag: sentence-similarity
datasets:
- skratos115/opendevin_DataDevinator
---
AVAILABLE ON OLLAMA: https://ollama.com/unclemusclez/smollm-135m-instruct-devinator
# Model Trained Using AutoTrain
- Problem type: Sentence Transformers
## Validation Metrics
No validation metrics available
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("unclemusclez/SmolLM-135M-Instruct-DEVINator-v0.1")
# Run inference
sentences = [
'search_query: autotrain',
'search_query: auto train',
'search_query: i love autotrain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
```
|
jeiku/NeuroNoise_v1
|
jeiku
| 2024-08-30T19:44:38Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml",
"base_model:finetune:IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T19:33:10Z |
---
library_name: transformers
license: other
base_model: IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
tags:
- generated_from_trainer
model-index:
- name: outputs/out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: NewEden/Gryphe-3.5-16k-Subset
type: sharegpt
conversation: chatml
- path: ResplendentAI/bluemoon
type: sharegpt
conversation: chatml
- path: openerotica/freedom-rp
type: sharegpt
conversation: chatml
- path: MinervaAI/Aesir-Preview
type: sharegpt
conversation: chatml
- path: anthracite-org/stheno-filtered-v1.1
type: sharegpt
conversation: chatml
- path: NewEden/Kalo-Opus-Instruct-22k-Refusal-Murdered
type: sharegpt
conversation: chatml
- path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
type: sharegpt
conversation: chatml
- path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
type: sharegpt
conversation: chatml
- path: jeiku/jeikutxt
type: completion
- path: ResplendentAI/Sissification_Hypno_1k
type: alpaca
- path: ResplendentAI/theory_of_mind_fixed_output
type: alpaca
- path: ResplendentAI/Synthetic_Soul_1k
type: alpaca
chat_template: chatml
val_set_size: 0.01
output_dir: ./outputs/out
adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
sequence_len: 8192
# sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
wandb_project: Neuro4B
wandb_entity:
wandb_watch:
wandb_name: Neuro4B
wandb_log_model:
gradient_accumulation_steps: 32
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3.json
fsdp:
fsdp_config:
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
</details><br>
# outputs/out
This model is a fine-tuned version of [IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml) on the datasets listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.4207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 19
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0722 | 0.0094 | 1 | 3.0240 |
| 1.5947 | 0.2529 | 27 | 2.8578 |
| 1.4579 | 0.5057 | 54 | 2.5541 |
| 1.4422 | 0.7586 | 81 | 2.4550 |
| 1.4039 | 1.0006 | 108 | 2.4334 |
| 1.3428 | 1.2534 | 135 | 2.4217 |
| 1.3054 | 1.5063 | 162 | 2.4259 |
| 1.3378 | 1.7591 | 189 | 2.4207 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
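## Usage
A minimal inference sketch, assuming the ChatML template from the axolotl config above was saved with the tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Sketch: single-turn ChatML-style generation with the fine-tuned checkpoint.
model_id = "jeiku/NeuroNoise_v1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in two sentences."}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```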
|
RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf
|
RichardErkhov
| 2024-08-30T19:40:18Z | 5 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-30T13:59:54Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
FusionNet_SOLAR - GGUF
- Model creator: https://huggingface.co/TomGrc/
- Original model: https://huggingface.co/TomGrc/FusionNet_SOLAR/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [FusionNet_SOLAR.Q2_K.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q2_K.gguf) | Q2_K | 5.52GB |
| [FusionNet_SOLAR.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.IQ3_XS.gguf) | IQ3_XS | 6.13GB |
| [FusionNet_SOLAR.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.IQ3_S.gguf) | IQ3_S | 6.48GB |
| [FusionNet_SOLAR.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q3_K_S.gguf) | Q3_K_S | 6.44GB |
| [FusionNet_SOLAR.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.IQ3_M.gguf) | IQ3_M | 6.69GB |
| [FusionNet_SOLAR.Q3_K.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q3_K.gguf) | Q3_K | 7.18GB |
| [FusionNet_SOLAR.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q3_K_M.gguf) | Q3_K_M | 7.18GB |
| [FusionNet_SOLAR.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q3_K_L.gguf) | Q3_K_L | 7.82GB |
| [FusionNet_SOLAR.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.IQ4_XS.gguf) | IQ4_XS | 8.06GB |
| [FusionNet_SOLAR.Q4_0.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q4_0.gguf) | Q4_0 | 8.4GB |
| [FusionNet_SOLAR.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.IQ4_NL.gguf) | IQ4_NL | 8.49GB |
| [FusionNet_SOLAR.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q4_K_S.gguf) | Q4_K_S | 3.76GB |
| [FusionNet_SOLAR.Q4_K.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q4_K.gguf) | Q4_K | 8.94GB |
| [FusionNet_SOLAR.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q4_K_M.gguf) | Q4_K_M | 8.94GB |
| [FusionNet_SOLAR.Q4_1.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q4_1.gguf) | Q4_1 | 9.32GB |
| [FusionNet_SOLAR.Q5_0.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q5_0.gguf) | Q5_0 | 10.24GB |
| [FusionNet_SOLAR.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q5_K_S.gguf) | Q5_K_S | 10.24GB |
| [FusionNet_SOLAR.Q5_K.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q5_K.gguf) | Q5_K | 10.52GB |
| [FusionNet_SOLAR.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q5_K_M.gguf) | Q5_K_M | 10.52GB |
| [FusionNet_SOLAR.Q5_1.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q5_1.gguf) | Q5_1 | 11.16GB |
| [FusionNet_SOLAR.Q6_K.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q6_K.gguf) | Q6_K | 12.2GB |
| [FusionNet_SOLAR.Q8_0.gguf](https://huggingface.co/RichardErkhov/TomGrc_-_FusionNet_SOLAR-gguf/blob/main/FusionNet_SOLAR.Q8_0.gguf) | Q8_0 | 15.8GB |
Original model description:
---
language:
- en
license: mit
pipeline_tag: text-generation
model-index:
- name: FusionNet_SOLAR
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 69.21
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 50.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TomGrc/FusionNet_SOLAR
name: Open LLM Leaderboard
---
# FusionNet_SOLAR
Fine-tuned model on English using the SOLAR Fusion method.
## Model description
This is an experiment with the SOLAR Fusion method of FusionNet. The model has 16B parameters and has been fine-tuned. Enjoy!
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TomGrc__FusionNet_SOLAR)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.08|
|AI2 Reasoning Challenge (25-Shot)|71.59|
|HellaSwag (10-Shot) |88.40|
|MMLU (5-Shot) |65.29|
|TruthfulQA (0-shot) |69.21|
|Winogrande (5-shot) |81.06|
|GSM8k (5-shot) |50.95|
|
mradermacher/Coder-2B-GGUF
|
mradermacher
| 2024-08-30T19:39:11Z | 27 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"en",
"base_model:suriya7/Coder-2B",
"base_model:quantized:suriya7/Coder-2B",
"endpoints_compatible",
"region:us"
] | null | 2024-08-30T19:30:28Z |
---
base_model: suriya7/Coder-2B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- unsloth
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/suriya7/Coder-2B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q2_K.gguf) | Q2_K | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.IQ3_XS.gguf) | IQ3_XS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.IQ3_S.gguf) | IQ3_S | 1.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q3_K_S.gguf) | Q3_K_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.IQ3_M.gguf) | IQ3_M | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q3_K_M.gguf) | Q3_K_M | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q3_K_L.gguf) | Q3_K_L | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.IQ4_XS.gguf) | IQ4_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q4_K_S.gguf) | Q4_K_S | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q4_K_M.gguf) | Q4_K_M | 1.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q5_K_S.gguf) | Q5_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q5_K_M.gguf) | Q5_K_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q6_K.gguf) | Q6_K | 2.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.Q8_0.gguf) | Q8_0 | 2.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Coder-2B-GGUF/resolve/main/Coder-2B.f16.gguf) | f16 | 5.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mertgulexe/distributed-model
|
mertgulexe
| 2024-08-30T19:37:47Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-08-30T19:30:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lmstudio-community/c4ai-command-r-plus-08-2024-GGUF
|
lmstudio-community
| 2024-08-30T19:36:22Z | 485 | 5 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"base_model:CohereForAI/c4ai-command-r-plus-08-2024",
"base_model:quantized:CohereForAI/c4ai-command-r-plus-08-2024",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-08-30T16:02:51Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
library_name: transformers
extra_gated_prompt: "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy)."
extra_gated_fields:
Name: text
Affiliation: text
Country:
type: select
options:
- Aruba
- Afghanistan
- Angola
- Anguilla
- Åland Islands
- Albania
- Andorra
- United Arab Emirates
- Argentina
- Armenia
- American Samoa
- Antarctica
- French Southern Territories
- Antigua and Barbuda
- Australia
- Austria
- Azerbaijan
- Burundi
- Belgium
- Benin
- Bonaire Sint Eustatius and Saba
- Burkina Faso
- Bangladesh
- Bulgaria
- Bahrain
- Bahamas
- Bosnia and Herzegovina
- Saint Barthélemy
- Belarus
- Belize
- Bermuda
- Plurinational State of Bolivia
- Brazil
- Barbados
- Brunei-Darussalam
- Bhutan
- Bouvet-Island
- Botswana
- Central African Republic
- Canada
- Cocos (Keeling) Islands
- Switzerland
- Chile
- China
- Côte-dIvoire
- Cameroon
- Democratic Republic of the Congo
- Cook Islands
- Colombia
- Comoros
- Cabo Verde
- Costa Rica
- Cuba
- Curaçao
- Christmas Island
- Cayman Islands
- Cyprus
- Czechia
- Germany
- Djibouti
- Dominica
- Denmark
- Dominican Republic
- Algeria
- Ecuador
- Egypt
- Eritrea
- Western Sahara
- Spain
- Estonia
- Ethiopia
- Finland
- Fiji
- Falkland Islands (Malvinas)
- France
- Faroe Islands
- Federated States of Micronesia
- Gabon
- United Kingdom
- Georgia
- Guernsey
- Ghana
- Gibraltar
- Guinea
- Guadeloupe
- Gambia
- Guinea Bissau
- Equatorial Guinea
- Greece
- Grenada
- Greenland
- Guatemala
- French Guiana
- Guam
- Guyana
- Hong Kong
- Heard Island and McDonald Islands
- Honduras
- Croatia
- Haiti
- Hungary
- Indonesia
- Isle of Man
- India
- British Indian Ocean Territory
- Ireland
- Islamic Republic of Iran
- Iraq
- Iceland
- Israel
- Italy
- Jamaica
- Jersey
- Jordan
- Japan
- Kazakhstan
- Kenya
- Kyrgyzstan
- Cambodia
- Kiribati
- Saint-Kitts-and-Nevis
- South Korea
- Kuwait
- Lao-Peoples-Democratic-Republic
- Lebanon
- Liberia
- Libya
- Saint-Lucia
- Liechtenstein
- Sri Lanka
- Lesotho
- Lithuania
- Luxembourg
- Latvia
- Macao
- Saint Martin (French-part)
- Morocco
- Monaco
- Republic of Moldova
- Madagascar
- Maldives
- Mexico
- Marshall Islands
- North Macedonia
- Mali
- Malta
- Myanmar
- Montenegro
- Mongolia
- Northern Mariana Islands
- Mozambique
- Mauritania
- Montserrat
- Martinique
- Mauritius
- Malawi
- Malaysia
- Mayotte
- Namibia
- New Caledonia
- Niger
- Norfolk Island
- Nigeria
- Nicaragua
- Niue
- Netherlands
- Norway
- Nepal
- Nauru
- New Zealand
- Oman
- Pakistan
- Panama
- Pitcairn
- Peru
- Philippines
- Palau
- Papua New Guinea
- Poland
- Puerto Rico
- North Korea
- Portugal
- Paraguay
- State of Palestine
- French Polynesia
- Qatar
- Réunion
- Romania
- Russia
- Rwanda
- Saudi Arabia
- Sudan
- Senegal
- Singapore
- South Georgia and the South Sandwich Islands
- Saint Helena Ascension and Tristan da Cunha
- Svalbard and Jan Mayen
- Solomon Islands
- Sierra Leone
- El Salvador
- San Marino
- Somalia
- Saint Pierre and Miquelon
- Serbia
- South Sudan
- Sao Tome and Principe
- Suriname
- Slovakia
- Slovenia
- Sweden
- Eswatini
- Sint Maarten (Dutch-part)
- Seychelles
- Syrian Arab Republic
- Turks and Caicos Islands
- Chad
- Togo
- Thailand
- Tajikistan
- Tokelau
- Turkmenistan
- Timor Leste
- Tonga
- Trinidad and Tobago
- Tunisia
- Turkey
- Tuvalu
- Taiwan
- United Republic of Tanzania
- Uganda
- Ukraine
- United States Minor Outlying Islands
- Uruguay
- United-States
- Uzbekistan
- Holy See (Vatican City State)
- Saint Vincent and the Grenadines
- Bolivarian Republic of Venezuela
- Virgin Islands British
- Virgin Islands U.S.
- VietNam
- Vanuatu
- Wallis and Futuna
- Samoa
- Yemen
- South Africa
- Zambia
- Zimbabwe
Receive email updates on C4AI and Cohere research, events, products and services?:
type: select
options:
- Yes
- No
I agree to use this model for non-commercial use ONLY: checkbox
quantized_by: bartowski
pipeline_tag: text-generation
base_model: CohereForAI/c4ai-command-r-plus-08-2024
lm_studio:
param_count: 105b
use_case: general
release_date: 30-08-2024
model_creator: CohereForAI
prompt_template: cohere_command_r
base_model: Cohere
system_prompt: You are a large language model called Command R built by the company Cohere. You act as a brilliant, sophisticated, AI-assistant chatbot trained to assist human users by providing thorough responses.
original_repo: CohereForAI/c4ai-command-r-plus-08-2024
---
## 💫 Community Model> C4AI Command R Plus 08-2024 by Cohere For AI
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [CohereForAI](https://huggingface.co/CohereForAI)<br>
**Original model**: [c4ai-command-r-plus-08-2024](https://huggingface.co/CohereForAI/c4ai-command-r-plus-08-2024)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3634](https://github.com/ggerganov/llama.cpp/releases/tag/b3634)<br>
## Model Summary:
C4AI Command R Plus 08-2024 is an update to the originally released 105B parameter Command R Plus. The original model received sweeping praise for its incredible RAG and multilingual abilities, and this release is no different.<br>
Not for commercial use; usage must adhere to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
## Prompt Template:
Choose the `Cohere Command R` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
```
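If you are using the original safetensors model with 🤗 Transformers rather than this GGUF, the same prompt format can be reproduced with the tokenizer's chat template. This is a minimal sketch, not part of the original card, and it assumes you have access to the gated CohereForAI repository:

```python
# Minimal sketch: reproduce the Command R prompt format with the HF tokenizer.
# Assumes access to the gated CohereForAI/c4ai-command-r-plus-08-2024 repository.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r-plus-08-2024")
messages = [{"role": "user", "content": "Hello, how are you?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
```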
This model also supports tool use and RAG prompt formats. For details on formatting for those use cases, see [tool use here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-08-2024#tool-use--agent-capabilities) and [RAG capabilities here](https://huggingface.co/CohereForAI/c4ai-command-r-plus-08-2024#grounded-generation-and-rag-capabilities).
## Technical Details
C4AI Command R Plus 08-2024 has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian).
Due to this multilingual training, it excels in multilingual tasks.
Command R Plus 08-2024 supports a context length of 128K.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
### Terms of Use (directly from Cohere For AI)
We hope that the release of this model will make community-based research efforts more accessible by releasing the weights of a highly performant 104 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
|
Tawkat/qlora-med42llm3-nclex-august-v7
|
Tawkat
| 2024-08-30T19:36:21Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T16:32:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheNovak/simpletuner-lora-Elbows
|
TheNovak
| 2024-08-30T19:20:49Z | 14 | 0 |
diffusers
|
[
"diffusers",
"flux",
"flux-diffusers",
"text-to-image",
"simpletuner",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-30T07:33:43Z |
---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- lora
- template:sd-lora
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'Elbows the cat on a box'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
---
# simpletuner-lora-Elbows
This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
The main validation prompt used during training was:
```
Elbows the cat on a box
```
## Validation settings
- CFG: `3.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `None`
- Seed: `42`
- Resolution: `1024x1024`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 5
- Training steps: 8200
- Learning rate: 8e-05
- Effective batch size: 1
- Micro-batch size: 1
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: bf16
- Quantised: No
- Xformers: Not used
- LoRA Rank: 64
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### Elbows
- Repeats: 0
- Total number of images: 1392
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'jusscubs/simpletuner-lora-Elbows'
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
prompt = "Elbows the cat on a box"
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
prompt=prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
width=1024,
height=1024,
guidance_scale=3.0,
).images[0]
image.save("output.png", format="PNG")
```
|
showvikdbz/phi-3.5-it-25k
|
showvikdbz
| 2024-08-30T19:18:06Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T19:11:53Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
geshijoker/distilbert_ner_wnut_model
|
geshijoker
| 2024-08-30T19:14:00Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-08-30T19:12:11Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert_ner_wnut_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: test
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.5339578454332553
- name: Recall
type: recall
value: 0.211306765523633
- name: F1
type: f1
value: 0.30278884462151395
- name: Accuracy
type: accuracy
value: 0.9365995468342525
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_ner_wnut_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2970
- Precision: 0.5340
- Recall: 0.2113
- F1: 0.3028
- Accuracy: 0.9366
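As a usage illustration (not part of the auto-generated card), the checkpoint can be run through the 🤗 `pipeline` API, assuming it is public on the Hub:

```python
# Minimal inference sketch for the fine-tuned WNUT-17 NER model.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="geshijoker/distilbert_ner_wnut_model",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("Golden State Warriors announced a new arena in San Francisco."))
```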
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 107 | 0.3140 | 0.3744 | 0.0732 | 0.1225 | 0.9306 |
| No log | 2.0 | 214 | 0.2970 | 0.5340 | 0.2113 | 0.3028 | 0.9366 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.2.1+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
csikasote/mms-zeroshot-bem-sv-male
|
csikasote
| 2024-08-30T19:07:14Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"BembaSpeech",
"mms",
"generated_from_trainer",
"base_model:mms-meta/mms-zeroshot-300m",
"base_model:finetune:mms-meta/mms-zeroshot-300m",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-08-30T13:55:57Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: mms-meta/mms-zeroshot-300m
tags:
- automatic-speech-recognition
- BembaSpeech
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-zeroshot-bem-sv-male
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-zeroshot-bem-sv-male
This model is a fine-tuned version of [mms-meta/mms-zeroshot-300m](https://huggingface.co/mms-meta/mms-zeroshot-300m) on the BEMBASPEECH - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1874
- Wer: 0.3949
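As a usage illustration (not part of the auto-generated card), the checkpoint can be run through the ASR pipeline; `audio.wav` below is a placeholder path to a Bemba speech recording:

```python
# Minimal inference sketch; "audio.wav" is a placeholder for a 16 kHz Bemba audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="csikasote/mms-zeroshot-bem-sv-male")
result = asr("audio.wav")
print(result["text"])
```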
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| No log | 0.2183 | 200 | 2.3822 | 1.0 |
| No log | 0.4367 | 400 | 0.2715 | 0.5093 |
| 2.7769 | 0.6550 | 600 | 0.2489 | 0.4820 |
| 2.7769 | 0.8734 | 800 | 0.2296 | 0.4695 |
| 0.6809 | 1.0917 | 1000 | 0.2209 | 0.4638 |
| 0.6809 | 1.3100 | 1200 | 0.2163 | 0.4469 |
| 0.6809 | 1.5284 | 1400 | 0.2092 | 0.4400 |
| 0.6113 | 1.7467 | 1600 | 0.2047 | 0.4346 |
| 0.6113 | 1.9651 | 1800 | 0.2074 | 0.4467 |
| 0.5974 | 2.1834 | 2000 | 0.2041 | 0.4304 |
| 0.5974 | 2.4017 | 2200 | 0.2054 | 0.4317 |
| 0.5974 | 2.6201 | 2400 | 0.1987 | 0.4240 |
| 0.5636 | 2.8384 | 2600 | 0.2003 | 0.4252 |
| 0.5636 | 3.0568 | 2800 | 0.1997 | 0.4287 |
| 0.5398 | 3.2751 | 3000 | 0.2097 | 0.4400 |
| 0.5398 | 3.4934 | 3200 | 0.1968 | 0.4165 |
| 0.5398 | 3.7118 | 3400 | 0.2013 | 0.4218 |
| 0.5334 | 3.9301 | 3600 | 0.2003 | 0.4230 |
| 0.5334 | 4.1485 | 3800 | 0.1976 | 0.4227 |
| 0.5123 | 4.3668 | 4000 | 0.1978 | 0.4198 |
| 0.5123 | 4.5852 | 4200 | 0.2019 | 0.4298 |
| 0.5123 | 4.8035 | 4400 | 0.1939 | 0.4146 |
| 0.5119 | 5.0218 | 4600 | 0.1989 | 0.4161 |
| 0.5119 | 5.2402 | 4800 | 0.1902 | 0.4076 |
| 0.4929 | 5.4585 | 5000 | 0.1929 | 0.4116 |
| 0.4929 | 5.6769 | 5200 | 0.1943 | 0.4144 |
| 0.4929 | 5.8952 | 5400 | 0.1922 | 0.4106 |
| 0.4878 | 6.1135 | 5600 | 0.1933 | 0.4137 |
| 0.4878 | 6.3319 | 5800 | 0.1920 | 0.4058 |
| 0.4755 | 6.5502 | 6000 | 0.1927 | 0.4171 |
| 0.4755 | 6.7686 | 6200 | 0.1920 | 0.4127 |
| 0.4755 | 6.9869 | 6400 | 0.1925 | 0.4061 |
| 0.475 | 7.2052 | 6600 | 0.1884 | 0.4058 |
| 0.475 | 7.4236 | 6800 | 0.1903 | 0.4070 |
| 0.4715 | 7.6419 | 7000 | 0.1882 | 0.3996 |
| 0.4715 | 7.8603 | 7200 | 0.1881 | 0.4033 |
| 0.4715 | 8.0786 | 7400 | 0.1885 | 0.4007 |
| 0.4575 | 8.2969 | 7600 | 0.1885 | 0.4016 |
| 0.4575 | 8.5153 | 7800 | 0.1888 | 0.4050 |
| 0.4611 | 8.7336 | 8000 | 0.1884 | 0.4046 |
| 0.4611 | 8.9520 | 8200 | 0.1881 | 0.3974 |
| 0.4611 | 9.1703 | 8400 | 0.1865 | 0.3956 |
| 0.4559 | 9.3886 | 8600 | 0.1875 | 0.3974 |
| 0.4559 | 9.6070 | 8800 | 0.1872 | 0.3996 |
| 0.4536 | 9.8253 | 9000 | 0.1876 | 0.3953 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
GlobalMeltdown/MaidenlessNoMore-7B-GGUF
|
GlobalMeltdown
| 2024-08-30T18:55:11Z | 12 | 3 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"Roleplay",
"RP",
"Chat",
"text-generation-inference",
"merge ",
"text generation",
"en",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-08-29T21:05:23Z |
---
base_model: MaidenlessNoMore-7B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
- Roleplay
- RP
- Chat
- text-generation-inference
- 'merge '
- text generation
license: cc-by-4.0
language:
- en
---

MaidenlessNoMore-7B-GGUF was my first attempt at merging an LLM.
I decided to merge one of the first models I really enjoyed, which not many people know of (https://huggingface.co/cookinai/Valkyrie-V1), with my other favorite and long-time fallback model: https://huggingface.co/SanjiWatsuki/Kunoichi-7B
This was more of an experiment than anything else. Hopefully it will lead to more interesting merges, and who knows what else, in the future.
I mean, we have to start somewhere, right?
The Alpaca or Alpaca-roleplay prompt format is recommended.
# GlobalMeltdown/MaidenlessNoMore-7B-GGUF
This model was converted to GGUF format from [`GlobalMeltdown/MaidenlessNoMore-7B`](https://huggingface.co/GlobalMeltdown/MaidenlessNoMore-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/GlobalMeltdown/MaidenlessNoMore-7B) for more details on the model.
|
csikasote/mms-zeroshot-bem-sv-female
|
csikasote
| 2024-08-30T18:42:13Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"BembaSpeech",
"mms",
"generated_from_trainer",
"base_model:mms-meta/mms-zeroshot-300m",
"base_model:finetune:mms-meta/mms-zeroshot-300m",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-08-30T14:12:22Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: mms-meta/mms-zeroshot-300m
tags:
- automatic-speech-recognition
- BembaSpeech
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-zeroshot-bem-sv-female
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-zeroshot-bem-sv-female
This model is a fine-tuned version of [mms-meta/mms-zeroshot-300m](https://huggingface.co/mms-meta/mms-zeroshot-300m) on the BEMBASPEECH - BEM dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2722
- Wer: 0.4375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| No log | 0.3992 | 200 | 1.9434 | 1.0 |
| No log | 0.7984 | 400 | 0.3648 | 0.5411 |
| 2.5692 | 1.1976 | 600 | 0.3371 | 0.5174 |
| 2.5692 | 1.5968 | 800 | 0.3229 | 0.5213 |
| 0.3941 | 1.9960 | 1000 | 0.3183 | 0.4915 |
| 0.3941 | 2.3952 | 1200 | 0.3068 | 0.5073 |
| 0.3941 | 2.7944 | 1400 | 0.3057 | 0.4688 |
| 0.3502 | 3.1936 | 1600 | 0.3017 | 0.4777 |
| 0.3502 | 3.5928 | 1800 | 0.2905 | 0.4647 |
| 0.3253 | 3.9920 | 2000 | 0.2857 | 0.4686 |
| 0.3253 | 4.3912 | 2200 | 0.2892 | 0.4601 |
| 0.3253 | 4.7904 | 2400 | 0.2848 | 0.4759 |
| 0.3066 | 5.1896 | 2600 | 0.2801 | 0.4444 |
| 0.3066 | 5.5888 | 2800 | 0.2752 | 0.4627 |
| 0.2988 | 5.9880 | 3000 | 0.2818 | 0.4614 |
| 0.2988 | 6.3872 | 3200 | 0.2759 | 0.4444 |
| 0.2988 | 6.7864 | 3400 | 0.2751 | 0.4382 |
| 0.2877 | 7.1856 | 3600 | 0.2726 | 0.4472 |
| 0.2877 | 7.5848 | 3800 | 0.2722 | 0.4484 |
| 0.2812 | 7.9840 | 4000 | 0.2710 | 0.4344 |
| 0.2812 | 8.3832 | 4200 | 0.2734 | 0.4410 |
| 0.2812 | 8.7824 | 4400 | 0.2734 | 0.4360 |
| 0.2742 | 9.1816 | 4600 | 0.2759 | 0.4398 |
| 0.2742 | 9.5808 | 4800 | 0.2740 | 0.4337 |
| 0.2731 | 9.9800 | 5000 | 0.2722 | 0.4382 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
lgk03/WITHINAPPS_NDD-mrbs_test-tags-CWAdj
|
lgk03
| 2024-08-30T18:21:47Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-08-30T18:00:13Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: WITHINAPPS_NDD-mrbs_test-tags-CWAdj
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# WITHINAPPS_NDD-mrbs_test-tags-CWAdj
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0932
- Accuracy: 0.8764
- F1: 0.8787
- Precision: 0.9031
- Recall: 0.8764
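As a usage illustration (not part of the auto-generated card), the classifier can be queried with the 🤗 `pipeline` API, assuming the checkpoint is public:

```python
# Minimal inference sketch for the fine-tuned DistilBERT classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="lgk03/WITHINAPPS_NDD-mrbs_test-tags-CWAdj")
print(clf("example input text to classify"))
```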
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 71 | 0.0974 | 0.8764 | 0.8787 | 0.9031 | 0.8764 |
| No log | 2.0 | 142 | 0.0972 | 0.8764 | 0.8787 | 0.9031 | 0.8764 |
| No log | 3.0 | 213 | 0.0957 | 0.8764 | 0.8787 | 0.9031 | 0.8764 |
| No log | 4.0 | 284 | 0.0947 | 0.8764 | 0.8787 | 0.9031 | 0.8764 |
| No log | 5.0 | 355 | 0.0932 | 0.8764 | 0.8787 | 0.9031 | 0.8764 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
arshiakarimian1/spam-student-4-256
|
arshiakarimian1
| 2024-08-30T18:16:01Z | 173 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-29T23:15:08Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: spam-student-4-256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spam-student-4-256
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
netcat420/MFANNv0.20.12
|
netcat420
| 2024-08-30T18:14:25Z | 17 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated",
"base_model:merge:mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated",
"base_model:netcat420/MFANNv0.19",
"base_model:merge:netcat420/MFANNv0.19",
"base_model:netcat420/MFANNv0.20",
"base_model:merge:netcat420/MFANNv0.20",
"base_model:netcat420/MFANNv0.20.11",
"base_model:merge:netcat420/MFANNv0.20.11",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T18:09:21Z |
---
base_model:
- netcat420/MFANNv0.19
- netcat420/MFANNv0.20.11
- netcat420/MFANNv0.20
- mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated) as a base.
### Models Merged
The following models were included in the merge:
* [netcat420/MFANNv0.19](https://huggingface.co/netcat420/MFANNv0.19)
* [netcat420/MFANNv0.20.11](https://huggingface.co/netcat420/MFANNv0.20.11)
* [netcat420/MFANNv0.20](https://huggingface.co/netcat420/MFANNv0.20)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: netcat420/MFANNv0.20.11
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANNv0.20
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
- model: netcat420/MFANNv0.19
parameters:
density: [1, 0.7, 0.1] # density gradient
weight: 1.0
merge_method: ties
base_model: mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
parameters:
normalize: true
int8_mask: true
dtype: float16
```
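The YAML above is the mergekit recipe that produced the weights. As a minimal sketch (not part of the original card), the merged checkpoint published in this repository can be loaded with 🤗 Transformers:

```python
# Minimal loading sketch for the merged model; device_map="auto" requires accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "netcat420/MFANNv0.20.12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype in the merge config above
    device_map="auto",
)
```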
|
LearnerSX/847_capstone_tweets_bert_v2
|
LearnerSX
| 2024-08-30T17:35:49Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-08-19T01:20:05Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: 847_capstone_tweets_bert_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 847_capstone_tweets_bert_v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6962
- Accuracy: 0.8508
- F1: 0.8487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
gglabs/Mistral-Nemo-12B-FC-Chat-0830-1-epoch
|
gglabs
| 2024-08-30T17:27:37Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"base_model:quantized:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-30T16:57:25Z |
---
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jacobposel/cc-rl-pattern-2
|
jacobposel
| 2024-08-30T17:20:15Z | 7 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-30T16:12:53Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
instance_prompt: <lora:RL>
---
# Cc Rl Pattern 2
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `<lora:RL>` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jacobposel/cc-rl-pattern-2', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Huseyin/bert-base-uncased-finetuned-ner
|
Huseyin
| 2024-08-30T17:15:38Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-07T11:39:18Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3910
- Precision: 0.9616
- Recall: 0.9637
- F1: 0.9627
- Accuracy: 0.9560
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3052 | 1.0 | 3334 | 0.2630 | 0.9365 | 0.9367 | 0.9366 | 0.9228 |
| 0.2104 | 2.0 | 6668 | 0.2481 | 0.9418 | 0.9537 | 0.9477 | 0.9400 |
| 0.163 | 3.0 | 10002 | 0.2390 | 0.9495 | 0.9606 | 0.9550 | 0.9479 |
| 0.1151 | 4.0 | 13336 | 0.2516 | 0.9549 | 0.9616 | 0.9583 | 0.9515 |
| 0.0809 | 5.0 | 16670 | 0.2887 | 0.9590 | 0.9556 | 0.9573 | 0.9493 |
| 0.0625 | 6.0 | 20004 | 0.2912 | 0.9573 | 0.9611 | 0.9592 | 0.9520 |
| 0.0516 | 7.0 | 23338 | 0.3139 | 0.9581 | 0.9563 | 0.9572 | 0.9501 |
| 0.0388 | 8.0 | 26672 | 0.3070 | 0.9605 | 0.9600 | 0.9602 | 0.9531 |
| 0.0273 | 9.0 | 30006 | 0.3344 | 0.9607 | 0.9617 | 0.9612 | 0.9535 |
| 0.0252 | 10.0 | 33340 | 0.3547 | 0.9608 | 0.9638 | 0.9623 | 0.9554 |
| 0.0242 | 11.0 | 36674 | 0.3726 | 0.9600 | 0.9619 | 0.9610 | 0.9541 |
| 0.0119 | 12.0 | 40008 | 0.3727 | 0.9602 | 0.9623 | 0.9612 | 0.9546 |
| 0.0078 | 13.0 | 43342 | 0.3772 | 0.9617 | 0.9639 | 0.9628 | 0.9562 |
| 0.0078 | 14.0 | 46676 | 0.3904 | 0.9615 | 0.9638 | 0.9627 | 0.9560 |
| 0.0026 | 15.0 | 50010 | 0.3910 | 0.9616 | 0.9637 | 0.9627 | 0.9560 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
geshijoker/distilbert_sentiment
|
geshijoker
| 2024-08-30T17:08:28Z | 107 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-08-30T16:49:06Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sentiment
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1957
- Accuracy: 0.9299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2792 | 1.0 | 782 | 0.1936 | 0.9251 |
| 0.1419 | 2.0 | 1564 | 0.1957 | 0.9299 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.2.1+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF
|
mradermacher
| 2024-08-30T17:02:15Z | 63 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:EpistemeAI2/Fireball-Alpaca-Llama3.1-8B-Philos",
"base_model:quantized:EpistemeAI2/Fireball-Alpaca-Llama3.1-8B-Philos",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-08-30T11:24:20Z |
---
base_model: EpistemeAI2/Fireball-Alpaca-Llama3.1-8B-Philos
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/EpistemeAI2/Fireball-Alpaca-Llama3.1-8B-Philos
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
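As an illustration only (not part of the original card), a single-file quant such as the i1-Q4_K_M entry from the table below can be loaded with llama-cpp-python, assuming that package is installed:

```python
# Minimal sketch using llama-cpp-python; the filename is the i1-Q4_K_M quant listed below.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF",
    filename="Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q4_K_M.gguf",
)
out = llm("Explain imatrix quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```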
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fireball-Alpaca-Llama3.1-8B-Philos-i1-GGUF/resolve/main/Fireball-Alpaca-Llama3.1-8B-Philos.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
netcat420/MFANNv0.20.11
|
netcat420
| 2024-08-30T16:53:41Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T15:26:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/pythia-410m-tulu-v2-mix-GGUF
|
mradermacher
| 2024-08-30T16:48:55Z | 10 | 0 |
transformers
|
[
"transformers",
"gguf",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:allenai/tulu-v2-sft-mixture",
"base_model:kykim0/pythia-410m-tulu-v2-mix",
"base_model:quantized:kykim0/pythia-410m-tulu-v2-mix",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-30T16:47:18Z |
---
base_model: kykim0/pythia-410m-tulu-v2-mix
datasets:
- allenai/tulu-v2-sft-mixture
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kykim0/pythia-410m-tulu-v2-mix
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
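As a quick, hedged starting point (not part of the original workflow above), the sketch below shows one way to fetch a single quant from this repo and run it with `llama-cpp-python`; the chosen filename, context size and prompt are illustrative assumptions, and any file from the table below can be substituted.
```python
# Hedged sketch: download one quant from this repo and run it with llama-cpp-python.
# The filename, context size and prompt are assumptions -- pick any file from the table below.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/pythia-410m-tulu-v2-mix-GGUF",
    filename="pythia-410m-tulu-v2-mix.Q4_K_M.gguf",  # assumed choice; see table below
)
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Briefly explain what a GGUF file is.", max_tokens=64)
print(out["choices"][0]["text"])
```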
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.IQ3_XS.gguf) | IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.IQ3_S.gguf) | IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.IQ3_M.gguf) | IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.Q8_0.gguf) | Q8_0 | 0.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-410m-tulu-v2-mix-GGUF/resolve/main/pythia-410m-tulu-v2-mix.f16.gguf) | f16 | 0.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/pythia-1b-tulu-v2-mix-GGUF
|
mradermacher
| 2024-08-30T16:48:09Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:allenai/tulu-v2-sft-mixture",
"base_model:kykim0/pythia-1b-tulu-v2-mix",
"base_model:quantized:kykim0/pythia-1b-tulu-v2-mix",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-30T16:44:01Z |
---
base_model: kykim0/pythia-1b-tulu-v2-mix
datasets:
- allenai/tulu-v2-sft-mixture
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kykim0/pythia-1b-tulu-v2-mix
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.IQ3_XS.gguf) | IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.IQ3_M.gguf) | IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q3_K_M.gguf) | Q3_K_M | 0.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q5_K_S.gguf) | Q5_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q6_K.gguf) | Q6_K | 0.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/pythia-1b-tulu-v2-mix-GGUF/resolve/main/pythia-1b-tulu-v2-mix.f16.gguf) | f16 | 2.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LoneStriker/LongWriter-llama3.1-8b-6.0bpw-h6-exl2
|
LoneStriker
| 2024-08-30T16:39:33Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Long Context",
"chatglm",
"en",
"zh",
"dataset:THUDM/LongWriter-6k",
"arxiv:2408.07055",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-08-30T16:36:45Z |
---
language:
- en
- zh
library_name: transformers
tags:
- Long Context
- chatglm
- llama
datasets:
- THUDM/LongWriter-6k
license: llama3.1
---
# LongWriter-llama3.1-8b
<p align="center">
🤗 <a href="https://huggingface.co/datasets/THUDM/LongWriter-6k" target="_blank">[LongWriter Dataset] </a> • 💻 <a href="https://github.com/THUDM/LongWriter" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2408.07055" target="_blank">[LongWriter Paper]</a>
</p>
LongWriter-llama3.1-8b is trained based on [Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B), and is capable of generating 10,000+ words at once.
Environment: `transformers>=4.43.0`
Please adhere to the prompt template (system prompt is optional): `<<SYS>>\n{system prompt}\n<</SYS>>\n\n[INST]{query1}[/INST]{response1}[INST]{query2}[/INST]{response2}...`
A simple demo for deployment of the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("THUDM/LongWriter-llama3.1-8b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("THUDM/LongWriter-llama3.1-8b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
model = model.eval()
query = "Write a 10000-word China travel guide"
prompt = f"[INST]{query}[/INST]"
input = tokenizer(prompt, truncation=False, return_tensors="pt").to(model.device)
context_length = input.input_ids.shape[-1]
output = model.generate(
**input,
max_new_tokens=32768,
num_beams=1,
do_sample=True,
temperature=0.5,
)[0]
response = tokenizer.decode(output[context_length:], skip_special_tokens=True)
print(response)
```
You can also deploy the model with [vllm](https://github.com/vllm-project/vllm), which can generate 10,000+ words within a minute. Here is an example:
```python
from vllm import LLM, SamplingParams
model = LLM(
model= "THUDM/LongWriter-llama3.1-8b",
dtype="auto",
trust_remote_code=True,
tensor_parallel_size=1,
max_model_len=32768,
gpu_memory_utilization=0.5,
)
tokenizer = model.get_tokenizer()
generation_params = SamplingParams(
temperature=0.5,
top_p=0.8,
top_k=50,
max_tokens=32768,
repetition_penalty=1,
)
query = "Write a 10000-word China travel guide"
prompt = f"[INST]{query}[/INST]"
input_ids = tokenizer(prompt, truncation=False, return_tensors="pt").input_ids[0].tolist()
outputs = model.generate(
sampling_params=generation_params,
prompt_token_ids=[input_ids],
)
output = outputs[0]
print(output.outputs[0].text)
```
License: [Llama-3.1 License](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
## Citation
If you find our work useful, please consider citing LongWriter:
```
@article{bai2024longwriter,
title={LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs},
author={Yushi Bai and Jiajie Zhang and Xin Lv and Linzhi Zheng and Siqi Zhu and Lei Hou and Yuxiao Dong and Jie Tang and Juanzi Li},
journal={arXiv preprint arXiv:2408.07055},
year={2024}
}
```
|
Gabrioloruioni/BertRE
|
Gabrioloruioni
| 2024-08-30T16:37:34Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-07-17T10:13:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anchovy/autogluon-chronos-t5-large
|
anchovy
| 2024-08-30T16:29:35Z | 44 | 0 | null |
[
"safetensors",
"t5",
"time series",
"forecasting",
"pretrained models",
"foundation models",
"time series foundation models",
"time-series",
"time-series-forecasting",
"arxiv:2403.07815",
"arxiv:1910.10683",
"license:apache-2.0",
"region:us"
] |
time-series-forecasting
| 2024-08-30T16:29:35Z |
---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---
# Chronos-T5 (Large)
Chronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).
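To make the scaling-and-quantization step above concrete, here is a rough, hedged sketch of the idea. It is not the actual Chronos tokenizer (the real implementation, including its 4096-token vocabulary and bin layout, is described in the paper); the bin range and example values below are arbitrary assumptions.
```python
# Simplified illustration only -- not the Chronos tokenizer. Mean-scale a series,
# then map each value to one of a fixed number of uniform bins so a language model
# can treat the series as a token sequence.
import numpy as np

def tokenize_series(values: np.ndarray, n_bins: int = 4096, limit: float = 15.0) -> np.ndarray:
    scale = float(np.abs(values).mean()) or 1.0     # mean scaling (assumed variant)
    scaled = np.clip(values / scale, -limit, limit)
    edges = np.linspace(-limit, limit, n_bins - 1)  # uniform bin edges (assumed range)
    return np.digitize(scaled, edges)               # token ids in [0, n_bins - 1]

print(tokenize_series(np.array([112.0, 118.0, 132.0, 129.0, 121.0])))
```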
<p align="center">
<img src="figures/main-figure.png" width="100%">
<br />
<span>
Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
</span>
</p>
---
## Architecture
The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is in the vocabulary size: Chronos-T5 models use 4096 different tokens, compared to 32128 of the original T5 models, resulting in fewer parameters.
| Model | Parameters | Based on |
| ---------------------------------------------------------------------- | ---------- | ---------------------------------------------------------------------- |
| [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny) | 8M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
| [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini) | 20M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
| [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
| [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
| [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large) |
## Usage
To perform inference with Chronos models, install the package in the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting) by running:
```
pip install git+https://github.com/amazon-science/chronos-forecasting.git
```
A minimal example showing how to perform inference using Chronos models:
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from chronos import ChronosPipeline
pipeline = ChronosPipeline.from_pretrained(
"amazon/chronos-t5-large",
device_map="cuda",
torch_dtype=torch.bfloat16,
)
df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")
# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
context = torch.tensor(df["#Passengers"])
prediction_length = 12
forecast = pipeline.predict(context, prediction_length) # shape [num_series, num_samples, prediction_length]
# visualize the forecast
forecast_index = range(len(df), len(df) + prediction_length)
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.grid()
plt.show()
```
## Citation
If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
```
@article{ansari2024chronos,
author = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
title = {Chronos: Learning the Language of Time Series},
journal = {arXiv preprint arXiv:2403.07815},
year = {2024}
}
```
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This project is licensed under the Apache-2.0 License.
|
anchovy/autogluon-chronos-t5-mini
|
anchovy
| 2024-08-30T16:29:21Z | 149 | 0 | null |
[
"safetensors",
"t5",
"time series",
"forecasting",
"pretrained models",
"foundation models",
"time series foundation models",
"time-series",
"time-series-forecasting",
"arxiv:2403.07815",
"arxiv:1910.10683",
"license:apache-2.0",
"region:us"
] |
time-series-forecasting
| 2024-08-30T16:29:20Z |
---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---
# Chronos-T5 (Mini)
Chronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).
<p align="center">
<img src="figures/main-figure.png" width="100%">
<br />
<span>
Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
</span>
</p>
---
## Architecture
The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is in the vocabulary size: Chronos-T5 models use 4096 different tokens, compared to 32128 of the original T5 models, resulting in fewer parameters.
| Model | Parameters | Based on |
| ---------------------------------------------------------------------- | ---------- | ---------------------------------------------------------------------- |
| [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny) | 8M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
| [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini) | 20M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
| [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
| [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
| [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large) |
## Usage
To perform inference with Chronos models, install the package in the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting) by running:
```
pip install git+https://github.com/amazon-science/chronos-forecasting.git
```
A minimal example showing how to perform inference using Chronos models:
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from chronos import ChronosPipeline
pipeline = ChronosPipeline.from_pretrained(
"amazon/chronos-t5-mini",
device_map="cuda",
torch_dtype=torch.bfloat16,
)
df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")
# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
context = torch.tensor(df["#Passengers"])
prediction_length = 12
forecast = pipeline.predict(context, prediction_length) # shape [num_series, num_samples, prediction_length]
# visualize the forecast
forecast_index = range(len(df), len(df) + prediction_length)
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.grid()
plt.show()
```
## Citation
If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
```
@article{ansari2024chronos,
author = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
title = {Chronos: Learning the Language of Time Series},
journal = {arXiv preprint arXiv:2403.07815},
year = {2024}
}
```
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This project is licensed under the Apache-2.0 License.
|
anchovy/autogluon-chronos-t5-tiny
|
anchovy
| 2024-08-30T16:29:03Z | 6 | 0 | null |
[
"safetensors",
"t5",
"time series",
"forecasting",
"pretrained models",
"foundation models",
"time series foundation models",
"time-series",
"time-series-forecasting",
"arxiv:2403.07815",
"arxiv:1910.10683",
"license:apache-2.0",
"region:us"
] |
time-series-forecasting
| 2024-08-30T16:29:03Z |
---
license: apache-2.0
pipeline_tag: time-series-forecasting
tags:
- time series
- forecasting
- pretrained models
- foundation models
- time series foundation models
- time-series
---
# Chronos-T5 (Tiny)
Chronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).
<p align="center">
<img src="figures/main-figure.png" width="100%">
<br />
<span>
Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
</span>
</p>
---
## Architecture
The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is in the vocabulary size: Chronos-T5 models use 4096 different tokens, compared to 32128 of the original T5 models, resulting in fewer parameters.
| Model | Parameters | Based on |
| ---------------------------------------------------------------------- | ---------- | ---------------------------------------------------------------------- |
| [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny) | 8M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
| [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini) | 20M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
| [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small) | 46M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
| [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base) | 200M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
| [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large) | 710M | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large) |
## Usage
To perform inference with Chronos models, install the package in the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting) by running:
```
pip install git+https://github.com/amazon-science/chronos-forecasting.git
```
A minimal example showing how to perform inference using Chronos models:
```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from chronos import ChronosPipeline
pipeline = ChronosPipeline.from_pretrained(
"amazon/chronos-t5-tiny",
device_map="cuda",
torch_dtype=torch.bfloat16,
)
df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")
# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
context = torch.tensor(df["#Passengers"])
prediction_length = 12
forecast = pipeline.predict(context, prediction_length) # shape [num_series, num_samples, prediction_length]
# visualize the forecast
forecast_index = range(len(df), len(df) + prediction_length)
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.grid()
plt.show()
```
## Citation
If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
```
@article{ansari2024chronos,
author = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
title = {Chronos: Learning the Language of Time Series},
journal = {arXiv preprint arXiv:2403.07815},
year = {2024}
}
```
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This project is licensed under the Apache-2.0 License.
|
RWKV/v5-EagleX-v2-7B-HF
|
RWKV
| 2024-08-30T16:26:20Z | 86 | 13 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"rwkv5",
"text-generation",
"custom_code",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:EleutherAI/pile",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-17T21:41:27Z |
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- EleutherAI/pile
language:
- en
---

### Huggingface RWKV EagleX 7B v2 Model
> **! Important Note !**
>
> The following is the HF transformers implementation of the EagleX 7B 2.25T model, meant to be used with the Hugging Face transformers library.
>
> [For the full model weights on its own, to use with other RWKV libraries, refer to `RWKV/v5-EagleX-v2-7B-pth`](https://huggingface.co/RWKV/v5-EagleX-v2-7B-pth)
>
>
> This is not an instruct-tuned model! (soon...)
## Quickstart with the hugging face transformer library
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True).to(torch.float32)
tokenizer = AutoTokenizer.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True)
```
## Evaluation
The following shows the progression of the model from 1.1T to 2.25T tokens trained.
|Model |Eagle-7B-HF|EagleX-7B-HF-v1|EagleX-7B-HF-v2|
|----------------------|-----------|---------------|---------------|
|Param Count |7.52 B |7.52 B |7.52 B |
|Tokens Trained |1.1 T |1.7 T |2.25 T |
|avg_acc |0.4822 |0.5391 |0.5495 |
|glue (acc) |0.5752 |0.7463 |0.7439 |
|anli (acc) |0.3594 |0.4847 |0.5097 |
|mnli (acc) |0.3802 |0.7928 |0.7884 |
|mnli_mismatch (acc) |0.3687 |0.7985 |0.784 |
|swag (acc) |0.568 |0.5814 |0.5905 |
|lambada_standard (acc)|0.685 |0.686 |0.7004 |
|lambada_openai (acc) |0.7425 |0.7522 |0.7502 |
|mmlu (acc) |0.3321 |0.4014 |0.438 |
|winogrande (acc) |0.674 |0.7206 |0.7332 |
|wnli (acc) |0.4225 |0.4648 |0.493 |
|truthfulqa (acc) |0.3303 |0.3268 |0.3401 |
|logiqa (acc) |0.2458 |0.2458 |0.2458 |
|logiqa2 (acc) |0.2494 |0.2595 |0.2621 |
|sciq (acc) |0.955 |0.96 |0.93 |
|piqa (acc) |0.7704 |0.7758 |0.7764 |
|arc_easy (acc) |0.7382 |0.7555 |0.7445 |
|arc_challenge (acc) |0.3951 |0.4087 |0.4155 |
|hellaswag (acc) |0.5264 |0.5411 |0.56 |
|openbookqa (acc) |0.302 |0.296 |0.304 |
|mathqa (acc) |0.26 |0.26 |0.2593 |
|arithmetic (acc) |0.245 |0.0634 |0.1703 |
Compared against other top performing models in the same weight class.
|Model |OLMo-7B |falcon-7b |Llama-2-7b-hf|EagleX-7B-HF-v2|Mistral-7B-v0.1|
|----------------------|---------------|----------------|-------------|---------------|---------------|
|Param Count |6.89 B |6.92 B |6.74 B |7.52 B |7.24 B |
|Tokens Trained |2.5 T |1.5 T |2 T |2.25 T |2 - 7 T? |
|avg_acc |0.4578 |0.4775 |0.5045 |0.5495 |0.5676 |
|glue (acc) |0.474 |0.4578 |0.4289 |0.7439 |0.515 |
|anli (acc) |0.3478 |0.3541 |0.3697 |0.5097 |0.3803 |
|mnli (acc) |0.3294 |0.3893 |0.4269 |0.7884 |0.4542 |
|mnli_mismatch (acc) |0.3348 |0.404 |0.4395 |0.784 |0.4632 |
|swag (acc) |0.5512 |0.5685 |0.5658 |0.5905 |0.5756 |
|lambada_standard (acc)|0.6396 |0.6868 |0.6808 |0.7004 |0.6944 |
|lambada_openai (acc) |0.6872 |0.746 |0.7353 |0.7502 |0.7553 |
|mmlu (acc) |0.2812 |0.2512 |0.4077 |0.438 |0.5964 |
|winogrande (acc) |0.6725 |0.6709 |0.6914 |0.7332 |0.7364 |
|wnli (acc) |0.5775 |0.4789 |0.4648 |0.493 |0.5775 |
|truthfulqa (acc) |0.3015 |0.2826 |0.3205 |0.3401 |0.3537 |
|logiqa (acc) |0.2335 |0.2151 |0.2535 |0.2458 |0.2427 |
|logiqa2 (acc) |0.2506 |0.2252 |0.2564 |0.2621 |0.3022 |
|sciq (acc) |0.927 |0.944 |0.939 |0.93 |0.959 |
|piqa (acc) |0.7878 |0.7949 |0.7807 |0.7764 |0.8052 |
|arc_easy (acc) |0.7353 |0.7479 |0.7643 |0.7445 |0.8081 |
|arc_challenge (acc) |0.3677 |0.4027 |0.4309 |0.4155 |0.5009 |
|hellaswag (acc) |0.5572 |0.5772 |0.5713 |0.56 |0.6131 |
|openbookqa (acc) |0.292 |0.306 |0.316 |0.304 |0.33 |
|mathqa (acc) |0.26 |0.2884 |0.2801 |0.2593 |0.3554 |
|arithmetic (acc) |0.0069 |0.2367 |0.4703 |0.1703 |0.9004 |
See the following, for the full details on this model: [https://blog.rwkv.com/p/eaglex-v2-soaring-past-llama2-7b](https://blog.rwkv.com/p/eaglex-v2-soaring-past-llama2-7b)
#### Running on CPU via HF transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
def generate_prompt(instruction, input=""):
instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n')
input = input.strip().replace('\r\n','\n').replace('\n\n','\n')
if input:
return f"""Instruction: {instruction}
Input: {input}
Response:"""
else:
return f"""User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: {instruction}
Assistant:"""
model = AutoModelForCausalLM.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True).to(torch.float32)
tokenizer = AutoTokenizer.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True)
text = "请介绍北京的旅游景点"
prompt = generate_prompt(text)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(inputs["input_ids"], max_new_tokens=333, do_sample=True, temperature=1.0, top_p=0.3, top_k=0, )
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
output:
```shell
User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: 请介绍北京的旅游景点
Assistant: 北京是中国的首都,拥有众多的旅游景点,以下是其中一些著名的景点:
1. 故宫:位于北京市中心,是明清两代的皇宫,内有大量的文物和艺术品。
2. 天安门广场:是中国最著名的广场之一,是中国人民政治协商会议的旧址,也是中国人民政治协商会议的中心。
3. 颐和园:是中国古代皇家园林之一,有着悠久的历史和丰富的文化内涵。
4. 长城:是中国古代的一道长城,全长约万里,是中国最著名的旅游景点之一。
5. 北京大学:是中国著名的高等教育机构之一,有着悠久的历史和丰富的文化内涵。
6. 北京动物园:是中国最大的动物园之一,有着丰富的动物资源和丰富的文化内涵。
7. 故宫博物院:是中国最著名的博物馆之一,收藏了大量的文物和艺术品,是中国最重要的文化遗产之一。
8. 天坛:是中国古代皇家
```
#### Running on GPU via HF transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
def generate_prompt(instruction, input=""):
instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n')
input = input.strip().replace('\r\n','\n').replace('\n\n','\n')
if input:
return f"""Instruction: {instruction}
Input: {input}
Response:"""
else:
return f"""User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: {instruction}
Assistant:"""
model = AutoModelForCausalLM.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True, torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True)
text = "介绍一下大熊猫"
prompt = generate_prompt(text)
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0, )
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
output:
```shell
User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: 介绍一下大熊猫
Assistant: 大熊猫是一种中国特有的哺乳动物,也是中国的国宝之一。它们的外貌特征是圆形的黑白相间的身体,有着黑色的毛发和白色的耳朵。大熊猫的食物主要是竹子,它们会在竹林中寻找竹子,并且会将竹子放在竹笼中进行储存。大熊猫的寿命约为20至30年,但由于栖息地的丧失和人类活动的
```
#### Batch Inference
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
def generate_prompt(instruction, input=""):
instruction = instruction.strip().replace('\r\n', '\n').replace('\n\n', '\n')
input = input.strip().replace('\r\n', '\n').replace('\n\n', '\n')
if input:
return f"""Instruction: {instruction}
Input: {input}
Response:"""
else:
return f"""User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: {instruction}
Assistant:"""
model = AutoModelForCausalLM.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True).to(torch.float32)
tokenizer = AutoTokenizer.from_pretrained("RWKV/v5-Eagle-7B-HF", trust_remote_code=True)
texts = ["请介绍北京的旅游景点", "介绍一下大熊猫", "乌兰察布"]
prompts = [generate_prompt(text) for text in texts]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)
outputs = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0, )
for output in outputs:
print(tokenizer.decode(output.tolist(), skip_special_tokens=True))
```
output:
```shell
User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: 请介绍北京的旅游景点
Assistant: 北京是中国的首都,拥有丰富的旅游资源和历史文化遗产。以下是一些北京的旅游景点:
1. 故宫:位于北京市中心,是明清两代的皇宫,是中国最大的古代宫殿建筑群之一。
2. 天安门广场:位于北京市中心,是中国最著名的城市广场之一,也是中国最大的城市广场。
3. 颐和
User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: 介绍一下大熊猫
Assistant: 大熊猫是一种生活在中国中部地区的哺乳动物,也是中国的国宝之一。它们的外貌特征是圆形的黑白相间的身体,有着黑色的毛发和圆圆的眼睛。大熊猫是一种濒危物种,目前只有在野外的几个保护区才能看到它们的身影。大熊猫的食物主要是竹子,它们会在竹子上寻找食物,并且可以通
User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: 乌兰察布
Assistant: 乌兰察布是中国新疆维吾尔自治区的一个县级市,位于新疆维吾尔自治区中部,是新疆的第二大城市。乌兰察布市是新疆的第一大城市,也是新疆的重要城市之一。乌兰察布市是新疆的经济中心,也是新疆的重要交通枢纽之一。乌兰察布市的人口约为2.5万人,其中汉族占绝大多数。乌
```
## Links
- [Our wiki](https://wiki.rwkv.com)
- [Full eval data](https://docs.google.com/spreadsheets/d/1CBLU6yKkW-8FMvGD4INO3qjeHZ0qkKnZFcM6n6lWNOs/edit#gid=912381775)
- [Recursal.AI Cloud Platform](https://recursal.ai)
- [HF Gradio Demo](https://huggingface.co/spaces/RWKV/v5-EagleX-v2-7B-gradio)
- [Blog article, detailing our model launch](https://blog.rwkv.com/p/eaglex-v2-soaring-past-llama2-7b)
## Acknowledgement
We are grateful for the help and support from the following key groups:
- [Recursal.ai](https://recursal.ai) team for financing the GPU resources, and managing the training of this foundation model - you can run the Eagle line of RWKV models on their cloud / on-premise platform today.
- EleutherAI for their support, especially in the v5/v6 Eagle/Finch paper
- Linux Foundation AI & Data group for supporting and hosting the RWKV project
|
Bajiyo/whisper-medium-studio-records_test
|
Bajiyo
| 2024-08-30T16:25:51Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-08-30T10:27:48Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-medium-studio-records_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-studio-records_test
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0333
- Wer: 15.6507
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for one way they might map to `Seq2SeqTrainingArguments`):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 6000
- mixed_precision_training: Native AMP
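A hedged sketch of how the values above might be expressed with `Seq2SeqTrainingArguments`; the `output_dir` is an assumption, and the evaluation cadence is inferred from the results table below rather than reported directly.
```python
# Hedged sketch only -- maps the hyperparameters listed above onto Seq2SeqTrainingArguments.
# output_dir is assumed; eval cadence is inferred from the 1000-step results table below.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-medium-studio-records_test",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=6000,
    fp16=True,                  # mixed_precision_training: Native AMP
    eval_strategy="steps",      # assumed
    eval_steps=1000,            # inferred from the results table
)
```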
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0568 | 0.4110 | 1000 | 0.0894 | 43.3939 |
| 0.0362 | 0.8220 | 2000 | 0.0589 | 29.9079 |
| 0.0149 | 1.2330 | 3000 | 0.0463 | 22.6922 |
| 0.0117 | 1.6441 | 4000 | 0.0375 | 19.2088 |
| 0.0039 | 2.0551 | 5000 | 0.0355 | 16.1483 |
| 0.0032 | 2.4661 | 6000 | 0.0333 | 15.6507 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mradermacher/TwinLlama-3.1-8B-DPO2-GGUF
|
mradermacher
| 2024-08-30T16:17:09Z | 7 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"dpo",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-30T15:50:20Z |
---
base_model: mlabonne/TwinLlama-3.1-8B-DPO2
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mlabonne/TwinLlama-3.1-8B-DPO2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TwinLlama-3.1-8B-DPO2-GGUF/resolve/main/TwinLlama-3.1-8B-DPO2.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ProElectro07/BioMistral-sharded
|
ProElectro07
| 2024-08-30T16:13:23Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T16:08:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
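The card itself leaves this section blank; as a minimal sketch, assuming the standard 🤗 transformers text-generation workflow for this Mistral-architecture checkpoint (the repo id is taken from the listing above, and the prompt is only an illustration):

```python
from transformers import pipeline

# Sketch only: load the checkpoint and generate a short completion.
generator = pipeline(
    "text-generation",
    model="ProElectro07/BioMistral-sharded",
    device_map="auto",  # requires `accelerate`; drop for CPU-only use
)
output = generator("What are the common symptoms of anemia?", max_new_tokens=64)
print(output[0]["generated_text"])
```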
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CharlesLi/OpenELM-1_1B-DPO-full-1-5
|
CharlesLi
| 2024-08-30T16:09:04Z | 133 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"openelm",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"custom_code",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-08-30T08:21:20Z |
---
library_name: transformers
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: OpenELM-1_1B-DPO-full-1-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OpenELM-1_1B-DPO-full-1-5
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1836
- Rewards/chosen: -14.0
- Rewards/rejected: -17.625
- Rewards/accuracies: 0.7227
- Rewards/margins: 3.625
- Logps/rejected: -2048.0
- Logps/chosen: -1720.0
- Logits/rejected: 4.2812
- Logits/chosen: 2.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
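As a sketch only (the actual training script, base model, and dataset are not included in this card), these settings map onto `trl`'s `DPOConfig` roughly as follows:

```python
from trl import DPOConfig

# Hypothetical mapping of the listed hyperparameters; not the original script.
dpo_args = DPOConfig(
    output_dir="OpenELM-1_1B-DPO-full-1-5",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,  # 8 per device x 4 GPUs x 2 steps = total train batch size 64
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
# A DPOTrainer(model=..., args=dpo_args, train_dataset=..., tokenizer=...)
# call would then consume this configuration.
```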
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6268 | 0.1047 | 100 | 0.6449 | -0.4805 | -0.6680 | 0.6406 | 0.1885 | -356.0 | -366.0 | -9.5625 | -10.0 |
| 0.5924 | 0.2093 | 200 | 0.5985 | -1.2031 | -1.6172 | 0.6875 | 0.4199 | -450.0 | -438.0 | -12.875 | -13.125 |
| 0.6197 | 0.3140 | 300 | 0.5811 | -1.375 | -1.8438 | 0.7090 | 0.4668 | -474.0 | -456.0 | -11.75 | -12.1875 |
| 0.5968 | 0.4186 | 400 | 0.5933 | -2.3125 | -2.8438 | 0.6934 | 0.5273 | -572.0 | -548.0 | -8.5625 | -9.25 |
| 0.5854 | 0.5233 | 500 | 0.5737 | -1.7422 | -2.2812 | 0.6953 | 0.5352 | -516.0 | -492.0 | -7.7188 | -8.625 |
| 0.5524 | 0.6279 | 600 | 0.5768 | -3.0156 | -3.7031 | 0.6914 | 0.6953 | -660.0 | -620.0 | -7.0312 | -7.7188 |
| 0.5602 | 0.7326 | 700 | 0.5756 | -3.1562 | -3.9062 | 0.7168 | 0.75 | -680.0 | -636.0 | -5.125 | -6.3438 |
| 0.5581 | 0.8373 | 800 | 0.5854 | -3.3906 | -4.0312 | 0.6914 | 0.6289 | -692.0 | -656.0 | -5.0938 | -5.9688 |
| 0.5793 | 0.9419 | 900 | 0.5657 | -3.1719 | -3.9062 | 0.7207 | 0.7383 | -680.0 | -636.0 | -3.9531 | -5.0312 |
| 0.2783 | 1.0466 | 1000 | 0.6053 | -4.75 | -5.875 | 0.7188 | 1.125 | -876.0 | -792.0 | -2.2188 | -3.3594 |
| 0.2417 | 1.1512 | 1100 | 0.6139 | -4.7812 | -5.8125 | 0.7070 | 1.0469 | -872.0 | -796.0 | -2.3594 | -4.125 |
| 0.2429 | 1.2559 | 1200 | 0.5897 | -5.7188 | -6.8125 | 0.7227 | 1.0781 | -968.0 | -892.0 | -0.7188 | -2.1719 |
| 0.2508 | 1.3605 | 1300 | 0.5948 | -5.4062 | -6.4062 | 0.6914 | 1.0 | -928.0 | -860.0 | -0.0104 | -1.5156 |
| 0.2169 | 1.4652 | 1400 | 0.6104 | -5.7812 | -6.9062 | 0.7031 | 1.1016 | -976.0 | -896.0 | 0.0820 | -1.75 |
| 0.2107 | 1.5699 | 1500 | 0.6062 | -6.0625 | -7.2812 | 0.6973 | 1.1953 | -1016.0 | -924.0 | -0.4590 | -2.1719 |
| 0.2472 | 1.6745 | 1600 | 0.6158 | -5.625 | -6.7188 | 0.7070 | 1.1016 | -960.0 | -880.0 | -2.0312 | -3.9688 |
| 0.2545 | 1.7792 | 1700 | 0.6170 | -6.25 | -7.5 | 0.7031 | 1.25 | -1040.0 | -944.0 | -1.2578 | -3.2031 |
| 0.2383 | 1.8838 | 1800 | 0.6061 | -5.625 | -6.75 | 0.7012 | 1.1172 | -964.0 | -880.0 | 0.7383 | -1.1328 |
| 0.2107 | 1.9885 | 1900 | 0.6135 | -6.5 | -7.7812 | 0.7383 | 1.2578 | -1064.0 | -968.0 | 0.3027 | -1.4297 |
| 0.0186 | 2.0931 | 2000 | 0.7473 | -8.0625 | -9.875 | 0.7090 | 1.8594 | -1280.0 | -1120.0 | 2.2812 | 0.4980 |
| 0.03 | 2.1978 | 2100 | 0.8345 | -9.9375 | -12.25 | 0.7070 | 2.2812 | -1512.0 | -1312.0 | 3.2031 | 1.5938 |
| 0.0284 | 2.3025 | 2200 | 0.7741 | -9.1875 | -11.3125 | 0.7012 | 2.0781 | -1416.0 | -1240.0 | 2.7812 | 1.0156 |
| 0.0352 | 2.4071 | 2300 | 0.7983 | -9.3125 | -11.3125 | 0.7090 | 2.0156 | -1424.0 | -1248.0 | 2.6406 | 0.9961 |
| 0.0345 | 2.5118 | 2400 | 0.8249 | -9.8125 | -12.0 | 0.7266 | 2.1719 | -1488.0 | -1304.0 | 3.2656 | 1.5625 |
| 0.0192 | 2.6164 | 2500 | 0.8865 | -10.25 | -12.5625 | 0.6973 | 2.2969 | -1544.0 | -1344.0 | 3.5938 | 1.9609 |
| 0.0261 | 2.7211 | 2600 | 0.7963 | -9.1875 | -11.4375 | 0.7129 | 2.25 | -1432.0 | -1240.0 | 2.7031 | 0.8672 |
| 0.0315 | 2.8257 | 2700 | 0.7619 | -9.0 | -10.9375 | 0.7109 | 1.9766 | -1384.0 | -1216.0 | 2.8594 | 0.8320 |
| 0.0293 | 2.9304 | 2800 | 0.8241 | -9.75 | -12.0625 | 0.7070 | 2.2656 | -1496.0 | -1296.0 | 3.1719 | 1.3359 |
| 0.0071 | 3.0351 | 2900 | 0.8609 | -10.0625 | -12.5 | 0.7188 | 2.3906 | -1536.0 | -1328.0 | 3.1719 | 1.3125 |
| 0.0099 | 3.1397 | 3000 | 0.9558 | -11.5 | -14.1875 | 0.7051 | 2.6875 | -1704.0 | -1472.0 | 3.4062 | 1.6484 |
| 0.0079 | 3.2444 | 3100 | 0.9341 | -11.125 | -13.75 | 0.7090 | 2.6562 | -1664.0 | -1432.0 | 3.25 | 1.5078 |
| 0.0104 | 3.3490 | 3200 | 0.9926 | -11.9375 | -14.8125 | 0.7090 | 2.9062 | -1768.0 | -1512.0 | 3.6719 | 1.9922 |
| 0.0089 | 3.4537 | 3300 | 0.9665 | -11.9375 | -14.8125 | 0.7188 | 2.875 | -1768.0 | -1512.0 | 3.8594 | 2.2656 |
| 0.0098 | 3.5583 | 3400 | 0.9548 | -11.1875 | -13.875 | 0.7109 | 2.75 | -1680.0 | -1432.0 | 4.0 | 2.3438 |
| 0.0109 | 3.6630 | 3500 | 1.0670 | -12.5625 | -15.6875 | 0.7168 | 3.1406 | -1856.0 | -1576.0 | 4.1875 | 2.5312 |
| 0.0081 | 3.7677 | 3600 | 1.0376 | -12.375 | -15.4375 | 0.7188 | 3.0938 | -1832.0 | -1552.0 | 4.125 | 2.4844 |
| 0.0081 | 3.8723 | 3700 | 1.0725 | -13.0 | -16.25 | 0.7168 | 3.25 | -1912.0 | -1616.0 | 4.1875 | 2.5938 |
| 0.0041 | 3.9770 | 3800 | 1.1346 | -13.5 | -17.0 | 0.7188 | 3.4688 | -1984.0 | -1672.0 | 4.2188 | 2.5781 |
| 0.0036 | 4.0816 | 3900 | 1.1589 | -13.8125 | -17.375 | 0.7168 | 3.5156 | -2024.0 | -1696.0 | 4.25 | 2.625 |
| 0.0016 | 4.1863 | 4000 | 1.1790 | -14.0625 | -17.625 | 0.7168 | 3.5781 | -2048.0 | -1720.0 | 4.2812 | 2.6719 |
| 0.0037 | 4.2909 | 4100 | 1.1847 | -14.0625 | -17.625 | 0.7168 | 3.6094 | -2064.0 | -1728.0 | 4.3125 | 2.6562 |
| 0.007 | 4.3956 | 4200 | 1.1905 | -14.1875 | -17.75 | 0.7227 | 3.6406 | -2064.0 | -1736.0 | 4.3125 | 2.6719 |
| 0.0038 | 4.5003 | 4300 | 1.1835 | -14.0625 | -17.75 | 0.7207 | 3.6406 | -2064.0 | -1728.0 | 4.2812 | 2.6406 |
| 0.0093 | 4.6049 | 4400 | 1.1819 | -14.0625 | -17.625 | 0.7207 | 3.625 | -2048.0 | -1720.0 | 4.2812 | 2.625 |
| 0.006 | 4.7096 | 4500 | 1.1817 | -14.0 | -17.625 | 0.7227 | 3.6406 | -2048.0 | -1720.0 | 4.2812 | 2.6094 |
| 0.0037 | 4.8142 | 4600 | 1.1826 | -14.0 | -17.625 | 0.7227 | 3.6406 | -2048.0 | -1720.0 | 4.25 | 2.6094 |
| 0.0059 | 4.9189 | 4700 | 1.1836 | -14.0 | -17.625 | 0.7227 | 3.625 | -2048.0 | -1720.0 | 4.2812 | 2.625 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
mradermacher/NeuroCom_4B-GGUF
|
mradermacher
| 2024-08-30T16:03:11Z | 20 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:FourOhFour/NeuroCom_4B",
"base_model:quantized:FourOhFour/NeuroCom_4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-30T15:34:36Z |
---
base_model: FourOhFour/NeuroCom_4B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FourOhFour/NeuroCom_4B
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
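As a minimal sketch (assuming a recent `llama-cpp-python` build), a single-file quant from the table below can be downloaded and run locally:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Sketch only: fetch one of the quants listed below and run a short completion.
gguf_path = hf_hub_download(
    repo_id="mradermacher/NeuroCom_4B-GGUF",
    filename="NeuroCom_4B.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
result = llm("Hello, how are you today?", max_tokens=64)
print(result["choices"][0]["text"])
```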
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q2_K.gguf) | Q2_K | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.IQ3_XS.gguf) | IQ3_XS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q3_K_S.gguf) | Q3_K_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.IQ3_S.gguf) | IQ3_S | 2.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.IQ3_M.gguf) | IQ3_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q3_K_M.gguf) | Q3_K_M | 2.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q3_K_L.gguf) | Q3_K_L | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.IQ4_XS.gguf) | IQ4_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q4_K_S.gguf) | Q4_K_S | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q4_K_M.gguf) | Q4_K_M | 2.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q5_K_S.gguf) | Q5_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q5_K_M.gguf) | Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q6_K.gguf) | Q6_K | 3.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuroCom_4B-GGUF/resolve/main/NeuroCom_4B.Q8_0.gguf) | Q8_0 | 4.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AQ206/models-that-will-literally-shatter-your-right-leg
|
AQ206
| 2024-08-30T15:59:26Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2023-06-30T16:56:10Z |
---
license: openrail
---
If you use any of these models on RVC... your right leg will absolutely shatter into pieces! If you want to keep your right leg, be cautious.
|
mrm8488/multilingual-e5-large-ft-sts-spanish-matryoshka-768-64-5e
|
mrm8488
| 2024-08-30T15:54:21Z | 74 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dataset_size:1K<n<10K",
"loss:MatryoshkaLoss",
"loss:CoSENTLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-01T22:13:40Z |
---
language: []
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dataset_size:1K<n<10K
- loss:MatryoshkaLoss
- loss:CoSENTLoss
base_model: intfloat/multilingual-e5-large
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
widget:
- source_sentence: El hombre captura una pelota
sentences:
- Un hombre lanza una pelota en el aire.
- Un hombre se encuentra tocando una flauta de madera.
- La mujer está maquillándose usando sombra de ojos.
- source_sentence: Un hombre está buscando algo.
sentences:
- En un mercado de granjeros, se encuentra un hombre.
- Se acerca a la pista un avión suizo de color blanco.
- dos chicas jóvenes se abrazan en la hierba.
- source_sentence: El avión está tocando tierra.
sentences:
- El avión animado se encuentra en proceso de aterrizaje.
- La capital de Siria fue golpeada por dos explosiones
- Violentos incidentes afectan a estudiantes chinos en Francia
- source_sentence: Un hombre saltando la cuerda.
sentences:
- Un hombre está saltando la cuerda.
- Una mujer entrena a su perro para saltar en el aire.
- Los gatitos están comiendo de los platos.
- source_sentence: tres perros gruñendo entre sí
sentences:
- Dos perros se aproximan uno al otro en el pasto.
- Una mujer sonriente brinda cariño a un pequeño bebé.
- Una mujer está montando a caballo en el campo.
pipeline_tag: sentence-similarity
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-large
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev 768
type: sts-dev-768
metrics:
- type: pearson_cosine
value: 0.8279951103268512
name: Pearson Cosine
- type: spearman_cosine
value: 0.8342643795984531
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8228439538329566
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.834870903153992
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8231076969394738
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8349270059177344
name: Spearman Euclidean
- type: pearson_dot
value: 0.8196281042113861
name: Pearson Dot
- type: spearman_dot
value: 0.8248683461954115
name: Spearman Dot
- type: pearson_max
value: 0.8279951103268512
name: Pearson Max
- type: spearman_max
value: 0.8349270059177344
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev 512
type: sts-dev-512
metrics:
- type: pearson_cosine
value: 0.8236357426336446
name: Pearson Cosine
- type: spearman_cosine
value: 0.8332692872015282
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8217552769156274
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8331746060276878
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8217859136681092
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8334069456110773
name: Spearman Euclidean
- type: pearson_dot
value: 0.8101789790612713
name: Pearson Dot
- type: spearman_dot
value: 0.8179205607773823
name: Spearman Dot
- type: pearson_max
value: 0.8236357426336446
name: Pearson Max
- type: spearman_max
value: 0.8334069456110773
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev 256
type: sts-dev-256
metrics:
- type: pearson_cosine
value: 0.816222860848086
name: Pearson Cosine
- type: spearman_cosine
value: 0.8303708513421737
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8178715987143794
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8301047046554985
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8183826652089494
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8301804247624904
name: Spearman Euclidean
- type: pearson_dot
value: 0.7878741921967743
name: Pearson Dot
- type: spearman_dot
value: 0.7904844114269662
name: Spearman Dot
- type: pearson_max
value: 0.8183826652089494
name: Pearson Max
- type: spearman_max
value: 0.8303708513421737
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev 128
type: sts-dev-128
metrics:
- type: pearson_cosine
value: 0.794202606017138
name: Pearson Cosine
- type: spearman_cosine
value: 0.8198385906414491
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8088714046889546
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8222921243120748
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8092312345267045
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8220266161646009
name: Spearman Euclidean
- type: pearson_dot
value: 0.7341586721030032
name: Pearson Dot
- type: spearman_dot
value: 0.7351749794310246
name: Spearman Dot
- type: pearson_max
value: 0.8092312345267045
name: Pearson Max
- type: spearman_max
value: 0.8222921243120748
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev 64
type: sts-dev-64
metrics:
- type: pearson_cosine
value: 0.7727295051414095
name: Pearson Cosine
- type: spearman_cosine
value: 0.8076629783565549
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7976419723073269
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8147883308842346
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7979124462870892
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8123832197697319
name: Spearman Euclidean
- type: pearson_dot
value: 0.6725844492342726
name: Pearson Dot
- type: spearman_dot
value: 0.6673162832940408
name: Spearman Dot
- type: pearson_max
value: 0.7979124462870892
name: Pearson Max
- type: spearman_max
value: 0.8147883308842346
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.8630482725201897
name: Pearson Cosine
- type: spearman_cosine
value: 0.8813284718659181
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8770818288812614
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8810971983428288
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8770132070253477
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8812162173545179
name: Spearman Euclidean
- type: pearson_dot
value: 0.8581811981775829
name: Pearson Dot
- type: spearman_dot
value: 0.8707402246720045
name: Spearman Dot
- type: pearson_max
value: 0.8770818288812614
name: Pearson Max
- type: spearman_max
value: 0.8813284718659181
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.8589909139210625
name: Pearson Cosine
- type: spearman_cosine
value: 0.8799604919891442
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8744468387217347
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8791142262015441
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8747974723064821
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8795698184784307
name: Spearman Euclidean
- type: pearson_dot
value: 0.8464185524060444
name: Pearson Dot
- type: spearman_dot
value: 0.8549652098582826
name: Spearman Dot
- type: pearson_max
value: 0.8747974723064821
name: Pearson Max
- type: spearman_max
value: 0.8799604919891442
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.8528262537030415
name: Pearson Cosine
- type: spearman_cosine
value: 0.8762917275750132
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8715060008387856
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8780718380107112
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.87251419758469
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8788770265821976
name: Spearman Euclidean
- type: pearson_dot
value: 0.801980870958869
name: Pearson Dot
- type: spearman_dot
value: 0.8007112694661982
name: Spearman Dot
- type: pearson_max
value: 0.87251419758469
name: Pearson Max
- type: spearman_max
value: 0.8788770265821976
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.8392066286150661
name: Pearson Cosine
- type: spearman_cosine
value: 0.8692426944903685
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8631603748425567
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8715673768304316
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.8643871758114816
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8724091426441261
name: Spearman Euclidean
- type: pearson_dot
value: 0.7461565194503229
name: Pearson Dot
- type: spearman_dot
value: 0.7403017354497338
name: Spearman Dot
- type: pearson_max
value: 0.8643871758114816
name: Pearson Max
- type: spearman_max
value: 0.8724091426441261
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.8213671607347727
name: Pearson Cosine
- type: spearman_cosine
value: 0.8621003145087452
name: Spearman Cosine
- type: pearson_manhattan
value: 0.8530869243121955
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.8631973638935834
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.854140567169475
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.8632627342101252
name: Spearman Euclidean
- type: pearson_dot
value: 0.6853599968011839
name: Pearson Dot
- type: spearman_dot
value: 0.6726454086764928
name: Spearman Dot
- type: pearson_max
value: 0.854140567169475
name: Pearson Max
- type: spearman_max
value: 0.8632627342101252
name: Spearman Max
---
# SentenceTransformer based on intfloat/multilingual-e5-large
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on the clibrain/stsb_multi_es_aug_gpt3.5-turbo_2 dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) <!-- at revision ab10c1a7f42e74530fe7ae5be82e6d4f11a719eb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- stsb_multi_es_aug
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mrm8488/multilingual-e5-large-ft-sts-spanish-matryoshka-768-64-5e")
# Run inference
sentences = [
'tres perros gruñendo entre sí',
'Dos perros se aproximan uno al otro en el pasto.',
'Una mujer sonriente brinda cariño a un pequeño bebé.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
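Because the model was trained with MatryoshkaLoss over the dimensions 768/512/256/128/64, the embeddings can also be truncated at load time. As a sketch (assuming Sentence Transformers ≥ 2.7, which added `truncate_dim`):

```python
from sentence_transformers import SentenceTransformer

# Sketch: keep only the first 64 embedding dimensions, matching the smallest
# Matryoshka dimension evaluated below (sts-dev-64 / sts-test-64).
model_64 = SentenceTransformer(
    "mrm8488/multilingual-e5-large-ft-sts-spanish-matryoshka-768-64-5e",
    truncate_dim=64,
)
embeddings = model_64.encode(["El avión está tocando tierra."])
print(embeddings.shape)
# (1, 64)
```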
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.828 |
| **spearman_cosine** | **0.8343** |
| pearson_manhattan | 0.8228 |
| spearman_manhattan | 0.8349 |
| pearson_euclidean | 0.8231 |
| spearman_euclidean | 0.8349 |
| pearson_dot | 0.8196 |
| spearman_dot | 0.8249 |
| pearson_max | 0.828 |
| spearman_max | 0.8349 |
#### Semantic Similarity
* Dataset: `sts-dev-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8236 |
| **spearman_cosine** | **0.8333** |
| pearson_manhattan | 0.8218 |
| spearman_manhattan | 0.8332 |
| pearson_euclidean | 0.8218 |
| spearman_euclidean | 0.8334 |
| pearson_dot | 0.8102 |
| spearman_dot | 0.8179 |
| pearson_max | 0.8236 |
| spearman_max | 0.8334 |
#### Semantic Similarity
* Dataset: `sts-dev-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8162 |
| **spearman_cosine** | **0.8304** |
| pearson_manhattan | 0.8179 |
| spearman_manhattan | 0.8301 |
| pearson_euclidean | 0.8184 |
| spearman_euclidean | 0.8302 |
| pearson_dot | 0.7879 |
| spearman_dot | 0.7905 |
| pearson_max | 0.8184 |
| spearman_max | 0.8304 |
#### Semantic Similarity
* Dataset: `sts-dev-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7942 |
| **spearman_cosine** | **0.8198** |
| pearson_manhattan | 0.8089 |
| spearman_manhattan | 0.8223 |
| pearson_euclidean | 0.8092 |
| spearman_euclidean | 0.822 |
| pearson_dot | 0.7342 |
| spearman_dot | 0.7352 |
| pearson_max | 0.8092 |
| spearman_max | 0.8223 |
#### Semantic Similarity
* Dataset: `sts-dev-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7727 |
| **spearman_cosine** | **0.8077** |
| pearson_manhattan | 0.7976 |
| spearman_manhattan | 0.8148 |
| pearson_euclidean | 0.7979 |
| spearman_euclidean | 0.8124 |
| pearson_dot | 0.6726 |
| spearman_dot | 0.6673 |
| pearson_max | 0.7979 |
| spearman_max | 0.8148 |
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.863 |
| **spearman_cosine** | **0.8813** |
| pearson_manhattan | 0.8771 |
| spearman_manhattan | 0.8811 |
| pearson_euclidean | 0.877 |
| spearman_euclidean | 0.8812 |
| pearson_dot | 0.8582 |
| spearman_dot | 0.8707 |
| pearson_max | 0.8771 |
| spearman_max | 0.8813 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:---------|
| pearson_cosine | 0.859 |
| **spearman_cosine** | **0.88** |
| pearson_manhattan | 0.8744 |
| spearman_manhattan | 0.8791 |
| pearson_euclidean | 0.8748 |
| spearman_euclidean | 0.8796 |
| pearson_dot | 0.8464 |
| spearman_dot | 0.855 |
| pearson_max | 0.8748 |
| spearman_max | 0.88 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8528 |
| **spearman_cosine** | **0.8763** |
| pearson_manhattan | 0.8715 |
| spearman_manhattan | 0.8781 |
| pearson_euclidean | 0.8725 |
| spearman_euclidean | 0.8789 |
| pearson_dot | 0.802 |
| spearman_dot | 0.8007 |
| pearson_max | 0.8725 |
| spearman_max | 0.8789 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8392 |
| **spearman_cosine** | **0.8692** |
| pearson_manhattan | 0.8632 |
| spearman_manhattan | 0.8716 |
| pearson_euclidean | 0.8644 |
| spearman_euclidean | 0.8724 |
| pearson_dot | 0.7462 |
| spearman_dot | 0.7403 |
| pearson_max | 0.8644 |
| spearman_max | 0.8724 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8214 |
| **spearman_cosine** | **0.8621** |
| pearson_manhattan | 0.8531 |
| spearman_manhattan | 0.8632 |
| pearson_euclidean | 0.8541 |
| spearman_euclidean | 0.8633 |
| pearson_dot | 0.6854 |
| spearman_dot | 0.6726 |
| pearson_max | 0.8541 |
| spearman_max | 0.8633 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### stsb_multi_es_aug
* Dataset: stsb_multi_es_aug
* Size: 2,697 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 8 tokens</li><li>mean: 22.25 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 22.01 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 2.67</li><li>max: 5.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------|:-------------------------------|
| <code>El pájaro de tamaño reducido se posó con delicadeza en una rama cubierta de escarcha.</code> | <code>Un ave de color amarillo descansaba tranquilamente en una rama.</code> | <code>3.200000047683716</code> |
| <code>Una chica está tocando la flauta en un parque.</code> | <code>Un grupo de músicos está tocando en un escenario al aire libre.</code> | <code>1.286</code> |
| <code>La aclamada escritora británica, Doris Lessing, galardonada con el premio Nobel, fallece</code> | <code>La destacada autora británica, Doris Lessing, reconocida con el prestigioso Premio Nobel, muere</code> | <code>4.199999809265137</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CoSENTLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Evaluation Dataset
#### stsb_multi_es_aug
* Dataset: stsb_multi_es_aug
* Size: 697 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 8 tokens</li><li>mean: 22.76 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 22.26 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 2.3</li><li>max: 5.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------|
| <code>Un incendio ocurrido en un hospital psiquiátrico ruso resultó en la trágica muerte de 38 personas.</code> | <code>Se teme que el incendio en un hospital psiquiátrico ruso cause la pérdida de la vida de 38 individuos.</code> | <code>4.199999809265137</code> |
| <code>"Street dijo que el otro individuo a veces se siente avergonzado de su fiesta, lo cual provoca risas en la multitud"</code> | <code>"A veces, el otro tipo se encuentra avergonzado de su fiesta y no se le puede culpar."</code> | <code>3.5</code> |
| <code>El veterano diplomático de Malasia tuvo un encuentro con Suu Kyi el miércoles en la casa del lago en Yangon donde permanece bajo arresto domiciliario.</code> | <code>Razali Ismail tuvo una reunión de 90 minutos con Suu Kyi, quien ganó el Premio Nobel de la Paz en 1991, en su casa del lago donde está recluida.</code> | <code>3.691999912261963</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "CoSENTLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | sts-dev-128_spearman_cosine | sts-dev-256_spearman_cosine | sts-dev-512_spearman_cosine | sts-dev-64_spearman_cosine | sts-dev-768_spearman_cosine | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:----:|:-------------:|:-------:|:---------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| 0.5917 | 100 | 21.7032 | 21.7030 | 0.8030 | 0.8124 | 0.8205 | 0.7839 | 0.8215 | - | - | - | - | - |
| 1.1834 | 200 | 21.4019 | 24.0898 | 0.7839 | 0.7972 | 0.8038 | 0.7680 | 0.8062 | - | - | - | - | - |
| 1.7751 | 300 | 21.2168 | 22.5421 | 0.7909 | 0.8027 | 0.8058 | 0.7786 | 0.8068 | - | - | - | - | - |
| 2.3669 | 400 | 20.7049 | 23.6522 | 0.7938 | 0.8049 | 0.8108 | 0.7873 | 0.8123 | - | - | - | - | - |
| 2.9586 | 500 | 20.5077 | 23.6100 | 0.8017 | 0.8116 | 0.8155 | 0.7893 | 0.8185 | - | - | - | - | - |
| 3.5503 | 600 | 19.2725 | 24.7539 | 0.8133 | 0.8254 | 0.8291 | 0.8032 | 0.8314 | - | - | - | - | - |
| 4.1420 | 700 | 19.0841 | 26.5286 | 0.8210 | 0.8298 | 0.8333 | 0.8102 | 0.8333 | - | - | - | - | - |
| 4.7337 | 800 | 18.6847 | 26.8158 | 0.8198 | 0.8304 | 0.8333 | 0.8077 | 0.8343 | - | - | - | - | - |
| 5.0 | 845 | - | - | - | - | - | - | - | 0.8692 | 0.8763 | 0.8800 | 0.8621 | 0.8813 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.0
- Transformers: 4.41.1
- PyTorch: 2.3.0+cu121
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/Westlake-Scribe-GGUF
|
mradermacher
| 2024-08-30T15:54:08Z | 46 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DazzlingXeno/Westlake-Scribe",
"base_model:quantized:DazzlingXeno/Westlake-Scribe",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-30T10:45:30Z |
---
base_model: DazzlingXeno/Westlake-Scribe
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DazzlingXeno/Westlake-Scribe
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Westlake-Scribe-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
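As a minimal sketch (assuming a recent `llama-cpp-python` build that provides `Llama.from_pretrained`), one of the quants listed below can be pulled and loaded straight from this repo:

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

# Sketch only: downloads the named GGUF from this repo and loads it.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Westlake-Scribe-GGUF",
    filename="Westlake-Scribe.Q4_K_S.gguf",
    n_ctx=4096,
)
result = llm("Write one sentence about the sea.", max_tokens=48)
print(result["choices"][0]["text"])
```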
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q2_K.gguf) | Q2_K | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.IQ3_XS.gguf) | IQ3_XS | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q3_K_S.gguf) | Q3_K_S | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.IQ3_S.gguf) | IQ3_S | 7.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.IQ3_M.gguf) | IQ3_M | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q3_K_M.gguf) | Q3_K_M | 8.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q3_K_L.gguf) | Q3_K_L | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.IQ4_XS.gguf) | IQ4_XS | 9.8 | |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q4_K_S.gguf) | Q4_K_S | 10.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q4_K_M.gguf) | Q4_K_M | 10.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q5_K_S.gguf) | Q5_K_S | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q5_K_M.gguf) | Q5_K_M | 12.8 | |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q6_K.gguf) | Q6_K | 14.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Westlake-Scribe-GGUF/resolve/main/Westlake-Scribe.Q8_0.gguf) | Q8_0 | 19.2 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bif02/eb_t5_base_16
|
bif02
| 2024-08-30T15:53:17Z | 5 | 0 | null |
[
"tensorboard",
"safetensors",
"t5",
"generated_from_trainer",
"base_model:bif02/eb_t5_base_16",
"base_model:finetune:bif02/eb_t5_base_16",
"license:apache-2.0",
"region:us"
] | null | 2024-08-25T12:11:37Z |
---
license: apache-2.0
base_model: bif02/eb_t5_base_16
tags:
- generated_from_trainer
model-index:
- name: eb_t5_base_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eb_t5_base_16
This model is a fine-tuned version of [bif02/eb_t5_base_16](https://huggingface.co/bif02/eb_t5_base_16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0058
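The card does not include a usage snippet; as a minimal sketch, assuming the standard 🤗 transformers seq2seq API (the intended task and input format are not documented here):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Sketch only: the task and prompt format of this fine-tune are not documented.
tokenizer = AutoTokenizer.from_pretrained("bif02/eb_t5_base_16")
model = T5ForConditionalGeneration.from_pretrained("bif02/eb_t5_base_16")

inputs = tokenizer("example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```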
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0031 | 1.0 | 1107 | 0.0058 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
leap-llm/Meta-Llama-3-8B-Instruct-sft-alfworld-iter2-3e-5
|
leap-llm
| 2024-08-30T15:42:35Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T15:36:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
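The card leaves this blank; as a minimal sketch, assuming the standard Llama-3-Instruct chat-template workflow in 🤗 transformers (the repo id is taken from the listing above, and the prompt is only an illustration):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "leap-llm/Meta-Llama-3-8B-Instruct-sft-alfworld-iter2-3e-5"

# Sketch only: assumes the tokenizer ships the Llama-3 chat template.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "You are in a kitchen. How would you find a mug?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```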
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BishFish/DialoGPT-small-edward
|
BishFish
| 2024-08-30T15:39:26Z | 117 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-29T14:08:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
julienkay/stable-diffusion-v1-5
|
julienkay
| 2024-08-30T15:39:14Z | 6 | 1 |
diffusers
|
[
"diffusers",
"onnx",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"diffusers:OnnxStableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-08-30T12:52:44Z |
---
license: creativeml-openrail-m
pipeline_tag: text-to-image
library_name: diffusers
---
This is a reupload of the ONNX version of the stable-diffusion-v1-5 model previously hosted in the runwayml/stable-diffusion-v1-5 repository, which is now offline.
I'm mostly reuploading this for my own experiments with running diffusion models in Unity Sentis, but the model should, of course, still work with any ONNX-compatible library.
See [com.doji.diffusers](https://github.com/julienkay/com.doji.diffusers) for details.
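For reference, a minimal sketch of loading this ONNX export with the 🧨Diffusers `OnnxStableDiffusionPipeline` (this assumes the files in this repo follow the standard ONNX pipeline layout and that `onnxruntime` is installed):
```py
from diffusers import OnnxStableDiffusionPipeline

# Pick the execution provider that matches your hardware
# ("CUDAExecutionProvider", "DmlExecutionProvider", ...).
pipe = OnnxStableDiffusionPipeline.from_pretrained(
    "julienkay/stable-diffusion-v1-5",
    provider="CPUExecutionProvider",
)
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse.png")
```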
Here is the original model card:
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this with the [🧨Diffusers library](https://github.com/huggingface/diffusers).
### Diffusers usage
```py
from diffusers import StableDiffusionPipeline
import torch
pipe = StableDiffusionPipeline.from_pretrained(
"benjamin-paine/stable-diffusion-v1-5",
torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the sketch after this list).
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
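As a quick illustration of the latent shapes mentioned in the first bullet, here is a minimal sketch using the VAE from a standard v1-5 checkpoint via 🧨Diffusers (the repo id and the random stand-in tensor are assumptions made purely for the example):
```py
import torch
from diffusers import AutoencoderKL

# Encode a 512x512 RGB image into the 4-channel latent space (downsampling factor f = 8).
vae = AutoencoderKL.from_pretrained("benjamin-paine/stable-diffusion-v1-5", subfolder="vae")
image = torch.randn(1, 3, 512, 512)               # stand-in for a preprocessed image tensor
latents = vae.encode(image).latent_dist.sample()
print(latents.shape)                              # torch.Size([1, 4, 64, 64]) = H/8 x W/8 x 4
```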
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/benjamin-paine/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/benjamin-paine/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit
|
Agnuxo
| 2024-08-30T15:34:12Z | 346 | 1 |
adapter-transformers
|
[
"adapter-transformers",
"gguf",
"llama",
"text-generation-inference",
"transformers",
"unsloth",
"mistral",
"en",
"es",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"dataset:jtatman/python-code-dataset-500k",
"dataset:flytech/python-codes-25k",
"dataset:Vezora/Tested-143k-Python-Alpaca",
"dataset:codefuse-ai/CodeExercise-Python-27k",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mlabonne/Evol-Instruct-Python-26k",
"base_model:Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit",
"base_model:adapter:Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-24T13:50:42Z |
---
base_model: Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_16bit
language:
- en
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
datasets:
- iamtarun/python_code_instructions_18k_alpaca
- jtatman/python-code-dataset-500k
- flytech/python-codes-25k
- Vezora/Tested-143k-Python-Alpaca
- codefuse-ai/CodeExercise-Python-27k
- Vezora/Tested-22k-Python-Alpaca
- mlabonne/Evol-Instruct-Python-26k
metrics:
- code_eval
library_name: adapter-transformers
---
# Uploaded model
[<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)
- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Finetuned from model :** Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Agnuxo/Mistral-NeMo-Minitron-8B-Alpaca-CODE-Python-GGUF-16bit
|
Agnuxo
| 2024-08-30T15:32:15Z | 29 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"gguf",
"mistral",
"text-generation-inference",
"transformers",
"unsloth",
"en",
"es",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"dataset:jtatman/python-code-dataset-500k",
"dataset:flytech/python-codes-25k",
"dataset:Vezora/Tested-143k-Python-Alpaca",
"dataset:codefuse-ai/CodeExercise-Python-27k",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mlabonne/Evol-Instruct-Python-26k",
"base_model:Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal",
"base_model:adapter:Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-08-23T12:02:41Z |
---
base_model: Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal
language:
- en
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
datasets:
- iamtarun/python_code_instructions_18k_alpaca
- jtatman/python-code-dataset-500k
- flytech/python-codes-25k
- Vezora/Tested-143k-Python-Alpaca
- codefuse-ai/CodeExercise-Python-27k
- Vezora/Tested-22k-Python-Alpaca
- mlabonne/Evol-Instruct-Python-26k
metrics:
- code_eval
library_name: adapter-transformers
---
# Uploaded model
[<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)
- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Finetuned from model :** Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
abhinayadutta/deepseek-coder-1.3b-base-int8-plsql-codegen
|
abhinayadutta
| 2024-08-30T15:31:57Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-07-19T10:12:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alpacaml/stable-diffusion-v1-5
|
alpacaml
| 2024-08-30T15:26:28Z | 121 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-08-30T15:26:28Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: >-
This model is open access and available to all, with a CreativeML OpenRAIL-M
license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or
harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use
them and are accountable for their use which must not go against the
provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as
a service. If you do, please be aware you have to include the same use
restrictions as the ones in the license and share a copy of the CreativeML
OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here:
https://huggingface.co/spaces/CompVis/stable-diffusion-license
extra_gated_heading: Please read the LICENSE to access this model
duplicated_from: runwayml/stable-diffusion-v1-5
---
# Stable Diffusion v1-5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and the [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion).
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Original GitHub Repository
1. Download the weights
- [v1-5-pruned-emaonly.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) - 4.27GB, ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.ckpt](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt) - 7.7GB, ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion).
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
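A minimal sketch of how the checker is typically kept, or explicitly opted out of, when loading the pipeline with 🧨Diffusers (the flag shown is an assumption about a standard setup, not part of this card):
```py
from diffusers import StableDiffusionPipeline

# Default load keeps the bundled safety checker that screens generated images.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Passing safety_checker=None disables the screening entirely; callers then take
# full responsibility for filtering outputs themselves.
unfiltered = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", safety_checker=None
)
```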
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and in 25% mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1** **Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
Neuranest/Phi-3.5-mini-instruct-hfc-gguf
|
Neuranest
| 2024-08-30T15:21:19Z | 6 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-30T15:15:49Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Neuranest
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_4bit
|
Agnuxo
| 2024-08-30T15:14:37Z | 18 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"gguf",
"llama",
"text-generation-inference",
"transformers",
"unsloth",
"en",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"dataset:jtatman/python-code-dataset-500k",
"dataset:flytech/python-codes-25k",
"dataset:Vezora/Tested-143k-Python-Alpaca",
"dataset:codefuse-ai/CodeExercise-Python-27k",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mlabonne/Evol-Instruct-Python-26k",
"base_model:Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit",
"base_model:adapter:Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-24T13:57:37Z |
---
base_model: Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
datasets:
- iamtarun/python_code_instructions_18k_alpaca
- jtatman/python-code-dataset-500k
- flytech/python-codes-25k
- Vezora/Tested-143k-Python-Alpaca
- codefuse-ai/CodeExercise-Python-27k
- Vezora/Tested-22k-Python-Alpaca
- mlabonne/Evol-Instruct-Python-26k
metrics:
- code_eval
library_name: adapter-transformers
---
# Uploaded model
- **Developed by:** Agnuxo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Agnuxo/Mamba-Codestral-7B-v0.1-instruct-python_coding_assistant-GGUF_4bit
|
Agnuxo
| 2024-08-30T15:11:44Z | 234 | 3 |
adapter-transformers
|
[
"adapter-transformers",
"gguf",
"mistral",
"text-generation-inference",
"transformers",
"unsloth",
"en",
"es",
"dataset:iamtarun/python_code_instructions_18k_alpaca",
"dataset:jtatman/python-code-dataset-500k",
"dataset:flytech/python-codes-25k",
"dataset:Vezora/Tested-143k-Python-Alpaca",
"base_model:Agnuxo/Mamba-Codestral-7B-v0.1-instruct-python_coding_assistant-GGUF_4bit",
"base_model:adapter:Agnuxo/Mamba-Codestral-7B-v0.1-instruct-python_coding_assistant-GGUF_4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-08-24T16:38:03Z |
---
base_model: Agnuxo/Mamba-Codestral-7B-v0.1-instruct-python_coding_assistant-GGUF_4bit
language:
- en
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
datasets:
- iamtarun/python_code_instructions_18k_alpaca
- jtatman/python-code-dataset-500k
- flytech/python-codes-25k
- Vezora/Tested-143k-Python-Alpaca
metrics:
- code_eval
library_name: adapter-transformers
---
# Uploaded model
- **Developed by:** Agnuxo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Neuranest/Phi-3.5-mini-instruct-hfc-16bit
|
Neuranest
| 2024-08-30T15:10:40Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-30T15:08:33Z |
---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Neuranest
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-3.5-mini-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
v000000/L3.1-Niitorm-8B-LATCOSx2-Version-GGUFs-IMATRIX
|
v000000
| 2024-08-30T14:57:56Z | 125 | 4 | null |
[
"gguf",
"llama",
"merge",
"llama-cpp",
"base_model:v000000/L3.1-Niitorm-8B-LATCOSx2",
"base_model:quantized:v000000/L3.1-Niitorm-8B-LATCOSx2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-29T21:10:49Z |
---
base_model: v000000/L3.1-Niitorm-8B-LATCOSx2
tags:
- llama
- merge
- llama-cpp
---
# Llama-3.1-Niitorm-8B-LATCOSx2

# Ordered by quality:
* q8_0 imatrix
* q8_0
* q6_k imatrix
* q6_k
* q5_k_m imatrix
* q5_k_s imatrix
* q4_k_m imatrix
* q4_k_s imatrix
* iq4_xs imatrix
* q4_0_4_8 imatrix arm
* q4_0_4_4 imatrix arm
This is a test *RP* model: <b>"v000000/L3.1-Niitorm-8B-t0.0001"</b> merged one extra time with <b>"akjindal53244/Llama-3.1-Storm-8B"</b>, using a new merging algorithm I wrote called <b>"LATCOS"</b>, which combines non-linear interpolation with cosine vector similarity between tensors, in both magnitude and direction.
This attempts to find the smoothest possible interpolation and make the models work more seamlessly together by taking into account the vector directions where both models agree. The result seems noticeably smarter even though it is only a bit more of Storm, but it is also more compliant, which could be a negative since it is less "dynamic".
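LATCOS itself is not published here, so the following is only a rough sketch of the idea as described above — cosine similarity between corresponding tensors modulating a non-linear interpolation weight; the function name and weighting curve are my own assumptions, not the author's actual code:
```py
import torch
import torch.nn.functional as F

def latcos_like_merge(a: torch.Tensor, b: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    """Blend two weight tensors, interpolating further toward `b` where the
    tensors agree in direction and staying close to `a` where they disagree."""
    cos = F.cosine_similarity(a.flatten(), b.flatten(), dim=0)   # direction agreement in [-1, 1]
    agreement = (cos + 1.0) / 2.0                                # map to [0, 1]
    w = t * agreement                                            # non-linear interpolation weight
    return (1.0 - w) * a + w * b
```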
<i>imatrix data: randomized bartowski/kalomeze sets, RP snippets, working GPT-4 code, human messaging, story text</i>
|
lmstudio-community/c4ai-command-r-08-2024-GGUF
|
lmstudio-community
| 2024-08-30T14:56:25Z | 528 | 22 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"base_model:CohereForAI/c4ai-command-r-08-2024",
"base_model:quantized:CohereForAI/c4ai-command-r-08-2024",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-08-30T14:34:31Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
license: cc-by-nc-4.0
library_name: transformers
extra_gated_prompt: "By submitting this form, you agree to the [License Agreement](https://cohere.com/c4ai-cc-by-nc-license) and acknowledge that the information you provide will be collected, used, and shared in accordance with Cohere’s [Privacy Policy]( https://cohere.com/privacy)."
extra_gated_fields:
Name: text
Affiliation: text
Country:
type: select
options:
- Aruba
- Afghanistan
- Angola
- Anguilla
- Åland Islands
- Albania
- Andorra
- United Arab Emirates
- Argentina
- Armenia
- American Samoa
- Antarctica
- French Southern Territories
- Antigua and Barbuda
- Australia
- Austria
- Azerbaijan
- Burundi
- Belgium
- Benin
- Bonaire Sint Eustatius and Saba
- Burkina Faso
- Bangladesh
- Bulgaria
- Bahrain
- Bahamas
- Bosnia and Herzegovina
- Saint Barthélemy
- Belarus
- Belize
- Bermuda
- Plurinational State of Bolivia
- Brazil
- Barbados
- Brunei-Darussalam
- Bhutan
- Bouvet-Island
- Botswana
- Central African Republic
- Canada
- Cocos (Keeling) Islands
- Switzerland
- Chile
- China
- Côte-dIvoire
- Cameroon
- Democratic Republic of the Congo
- Cook Islands
- Colombia
- Comoros
- Cabo Verde
- Costa Rica
- Cuba
- Curaçao
- Christmas Island
- Cayman Islands
- Cyprus
- Czechia
- Germany
- Djibouti
- Dominica
- Denmark
- Dominican Republic
- Algeria
- Ecuador
- Egypt
- Eritrea
- Western Sahara
- Spain
- Estonia
- Ethiopia
- Finland
- Fiji
- Falkland Islands (Malvinas)
- France
- Faroe Islands
- Federated States of Micronesia
- Gabon
- United Kingdom
- Georgia
- Guernsey
- Ghana
- Gibraltar
- Guinea
- Guadeloupe
- Gambia
- Guinea Bissau
- Equatorial Guinea
- Greece
- Grenada
- Greenland
- Guatemala
- French Guiana
- Guam
- Guyana
- Hong Kong
- Heard Island and McDonald Islands
- Honduras
- Croatia
- Haiti
- Hungary
- Indonesia
- Isle of Man
- India
- British Indian Ocean Territory
- Ireland
- Islamic Republic of Iran
- Iraq
- Iceland
- Israel
- Italy
- Jamaica
- Jersey
- Jordan
- Japan
- Kazakhstan
- Kenya
- Kyrgyzstan
- Cambodia
- Kiribati
- Saint-Kitts-and-Nevis
- South Korea
- Kuwait
- Lao-Peoples-Democratic-Republic
- Lebanon
- Liberia
- Libya
- Saint-Lucia
- Liechtenstein
- Sri Lanka
- Lesotho
- Lithuania
- Luxembourg
- Latvia
- Macao
- Saint Martin (French-part)
- Morocco
- Monaco
- Republic of Moldova
- Madagascar
- Maldives
- Mexico
- Marshall Islands
- North Macedonia
- Mali
- Malta
- Myanmar
- Montenegro
- Mongolia
- Northern Mariana Islands
- Mozambique
- Mauritania
- Montserrat
- Martinique
- Mauritius
- Malawi
- Malaysia
- Mayotte
- Namibia
- New Caledonia
- Niger
- Norfolk Island
- Nigeria
- Nicaragua
- Niue
- Netherlands
- Norway
- Nepal
- Nauru
- New Zealand
- Oman
- Pakistan
- Panama
- Pitcairn
- Peru
- Philippines
- Palau
- Papua New Guinea
- Poland
- Puerto Rico
- North Korea
- Portugal
- Paraguay
- State of Palestine
- French Polynesia
- Qatar
- Réunion
- Romania
- Russia
- Rwanda
- Saudi Arabia
- Sudan
- Senegal
- Singapore
- South Georgia and the South Sandwich Islands
- Saint Helena Ascension and Tristan da Cunha
- Svalbard and Jan Mayen
- Solomon Islands
- Sierra Leone
- El Salvador
- San Marino
- Somalia
- Saint Pierre and Miquelon
- Serbia
- South Sudan
- Sao Tome and Principe
- Suriname
- Slovakia
- Slovenia
- Sweden
- Eswatini
- Sint Maarten (Dutch-part)
- Seychelles
- Syrian Arab Republic
- Turks and Caicos Islands
- Chad
- Togo
- Thailand
- Tajikistan
- Tokelau
- Turkmenistan
- Timor Leste
- Tonga
- Trinidad and Tobago
- Tunisia
- Turkey
- Tuvalu
- Taiwan
- United Republic of Tanzania
- Uganda
- Ukraine
- United States Minor Outlying Islands
- Uruguay
- United-States
- Uzbekistan
- Holy See (Vatican City State)
- Saint Vincent and the Grenadines
- Bolivarian Republic of Venezuela
- Virgin Islands British
- Virgin Islands U.S.
- VietNam
- Vanuatu
- Wallis and Futuna
- Samoa
- Yemen
- South Africa
- Zambia
- Zimbabwe
Receive email updates on C4AI and Cohere research, events, products and services?:
type: select
options:
- Yes
- No
I agree to use this model for non-commercial use ONLY: checkbox
quantized_by: bartowski
pipeline_tag: text-generation
base_model: CohereForAI/c4ai-command-r-08-2024
lm_studio:
param_count: 35b
use_case: general
release_date: 30-08-2024
model_creator: CohereForAI
prompt_template: cohere_command_r
base_model: Cohere
system_prompt: You are a large language model called Command R built by the company Cohere. You act as a brilliant, sophisticated, AI-assistant chatbot trained to assist human users by providing thorough responses.
original_repo: CohereForAI/c4ai-command-r-08-2024
---
## 💫 Community Model> C4AI Command R 08-2024 by Cohere For AI
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [CohereForAI](https://huggingface.co/CohereForAI)<br>
**Original model**: [c4ai-command-r-08-2024](https://huggingface.co/CohereForAI/c4ai-command-r-08-2024)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3634](https://github.com/ggerganov/llama.cpp/releases/tag/b3634)<br>
## Model Summary:
C4AI Command R 08-2024 is an update to the originally released 35B parameter Command R. The original Command R model received sweeping praise for its incredible RAG and multilingual abilities, and this model is no different.<br>
Not for commercial use, must adhere to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
## Prompt Template:
Choose the `Cohere Command R` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
```
This model also supports tool use and RAG prompt formats. For details on formatting for those use cases, view [tool use here](https://huggingface.co/CohereForAI/c4ai-command-r-08-2024#tool-use--agent-capabilities) and [RAG capabilities here](https://huggingface.co/CohereForAI/c4ai-command-r-08-2024#grounded-generation-and-rag-capabilities)
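Outside of LM Studio, the same template can be reproduced by hand; below is a minimal sketch with `llama-cpp-python` (the GGUF filename and generation settings are assumptions for illustration only):
```py
from llama_cpp import Llama

TEMPLATE = (
    "<BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{prompt}"
    "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
)

llm = Llama(model_path="c4ai-command-r-08-2024-Q4_K_M.gguf", n_ctx=8192)
out = llm(TEMPLATE.format(prompt="Summarize retrieval-augmented generation in one sentence."),
          max_tokens=128)
print(out["choices"][0]["text"])
```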
## Technical Details
C4AI Command R 08-2024 has been trained on 23 languages (English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Simplified Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian).
Due to this multilingual training, it excels in multilingual tasks.
Command R 08-2024 supports a context length of 128K.
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
### Terms of Use (directly from Cohere For AI)
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant 35 billion parameter model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
|
U4R/StructTable-base
|
U4R
| 2024-08-30T14:53:12Z | 42,463 | 7 |
transformers
|
[
"transformers",
"safetensors",
"pix2struct",
"image-text-to-text",
"image-to-text",
"en",
"zh",
"arxiv:2406.11633",
"license:apache-2.0",
"region:us"
] |
image-to-text
| 2024-07-26T13:34:55Z |
---
language:
- en
- zh
pipeline_tag: image-to-text
inference: false
license: apache-2.0
---
<div align="center">
<h1>StructEqTable-Deploy: A High-efficiency Open-source Toolkit for Table-to-Latex Transformation</h1>
[[ Github Repo ]](https://github.com/UniModal4Reasoning/StructEqTable-Deploy) [[ Related Paper ]](https://arxiv.org/abs/2406.11633) [[ Website ]](https://unimodal4reasoning.github.io/DocGenome_page/)
[[ Dataset (Google Drive)]](https://drive.google.com/drive/folders/1OIhnuQdIjuSSDc_QL2nP4NwugVDgtItD) [[ Dataset (Hugging Face) ]](https://huggingface.co/datasets/U4R/DocGenome/tree/main) [[Models 🤗(Hugging Face)]](https://huggingface.co/U4R/StructTable-base/tree/main)
</div>
Welcome to the official repository of StructEqTable-Deploy, a solution that converts table images into LaTeX, powered by scalable data from the [DocGenome benchmark](https://unimodal4reasoning.github.io/DocGenome_page/).
## Overview
Tables are an effective way to represent structured data in scientific publications, financial statements, invoices, web pages, and many other scenarios. Extracting tabular data from a visual table image and performing downstream reasoning tasks on the extracted data is challenging, mainly because tables often have complicated column and row headers with spanning cells. To address these challenges, we present TableX, a large-scale multi-modal table benchmark extracted from the [DocGenome benchmark](https://unimodal4reasoning.github.io/DocGenome_page/) for table pre-training, comprising more than 2 million high-quality image-LaTeX pairs covering 156 disciplinary classes. Benefiting from such large-scale data, we also train an end-to-end model, StructEqTable, which precisely produces the corresponding LaTeX description from a visual table image and performs multiple table-related reasoning tasks, including structural extraction and question answering, broadening its application scope and potential.
## Changelog
Tip: the current version of StructEqTable can process table images from scientific documents such as arXiv and SciHub papers. Times New Roman and Songti (宋体) are the main fonts in the training table images; other fonts may decrease the accuracy of the model's output.
- **[2024/8/22] 🔥 We have released our [latest model](https://huggingface.co/U4R/StructTable-base/tree/v0.2), fine-tuned on the DocGenome dataset. This version features improved inference speed and robustness, achieved through data augmentation and a reduced image token count.**
- [2024/8/08] We have released the TensorRT accelerated version, which takes only about 1 second for most images on an A100 GPU. Please follow the tutorial to install the environment and compile the model weights.
- [2024/7/30] We have released the first version of StructEqTable.
## TODO
- [x] Release inference code and checkpoints of StructEqTable.
- [x] Support Chinese version of StructEqTable.
- [x] Accelerated version of StructEqTable using TensorRT-LLM.
- [ ] Expand more domains of table image to improve the model's general capabilities.
- [ ] Release our table pre-training and fine-tuning code
## Efficient Inference
Our model now supports TensorRT-LLM deployment, achieving a 10x or greater speedup during inference.
Please refer to [GETTING_STARTED.md](docs/GETTING_STARTED.md) to learn how to deploy it.
## Installation
``` bash
conda create -n structeqtable "python>=3.10"
conda activate structeqtable
# Install from Source code (Suggested)
git clone https://github.com/UniModal4Reasoning/StructEqTable-Deploy.git
cd StructEqTable-Deploy
python setup.py develop
# or Install from Github repo
pip install "git+https://github.com/UniModal4Reasoning/StructEqTable-Deploy.git"
# or Install from PyPI
pip install struct-eqtable==0.1.0
```
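Since the checkpoint is Pix2Struct-based, a rough transformers-only loading sketch is shown below. This is an assumption about the shipped processor and model configs; the officially supported path is the demo script in the Quick Demo section.
```python
import torch
from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

# Rough sketch only: assumes U4R/StructTable-base ships Pix2Struct-compatible
# processor and model configs; the supported path is tools/demo/demo.py.
processor = Pix2StructProcessor.from_pretrained("U4R/StructTable-base")
model = Pix2StructForConditionalGeneration.from_pretrained("U4R/StructTable-base")

image = Image.open("demo.png")  # a table image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=1024)
print(processor.decode(output_ids[0], skip_special_tokens=True))  # LaTeX table code
```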
## Model Zoo
| Model | Image Token Num | Model Size | Training Data | Data Augmentation | TensorRT | HuggingFace |
|---------------------|---------------------|------------|------------------|-------------------|----------|-------------------|
| StructEqTable-base | 4096 | ~300M | DocGenome | | ☑️ | [v0.1](https://huggingface.co/U4R/StructTable-base/tree/v0.1) |
| StructEqTable-base | 2048 | ~300M | DocGenome | ☑️ | ☑️ | [v0.2](https://huggingface.co/U4R/StructTable-base/tree/v0.2) |
## Quick Demo
- Run tools/demo/demo.py
```shell script
cd tools/demo
python demo.py \
--image_path ./demo.png \
--ckpt_path ${CKPT_PATH} \
--output_format latex
```
- HTML or Markdown format output
Our model outputs LaTeX code by default.
If you want other formats such as HTML or Markdown, `pypandoc` can convert the LaTeX code into HTML and Markdown for simple tables (tables without merged cells).
```shell script
sudo apt install pandoc
pip install pypandoc
cd tools/demo
python demo.py \
--image_path ./demo.png \
--ckpt_path ${CKPT_PATH} \
--output_format html markdown
```
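If you prefer to convert the generated LaTeX yourself rather than through the demo script, a minimal `pypandoc` sketch (assuming pandoc is installed as above) looks like this; the table string below is only a placeholder for the model's output.
```python
import pypandoc

# Placeholder LaTeX table standing in for StructEqTable's output (no merged cells).
latex_table = r"""\begin{tabular}{|c|c|}
\hline
Name & Score \\
\hline
A & 1 \\
B & 2 \\
\hline
\end{tabular}"""

html = pypandoc.convert_text(latex_table, to="html", format="latex")
markdown = pypandoc.convert_text(latex_table, to="markdown", format="latex")
print(html)
print(markdown)
```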
- Visualization Results
- The input data are sampled from the SciHub domain.


## Acknowledgements
- [DocGenome](https://github.com/UniModal4Reasoning/DocGenome). An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Models.
- [ChartVLM](https://github.com/UniModal4Reasoning/ChartVLM). A Versatile Benchmark and Foundation Model for Complicated Chart Reasoning.
- [Pix2Struct](https://github.com/google-research/pix2struct). Screenshot Parsing as Pretraining for Visual Language Understanding.
- [UniMERNet](https://github.com/opendatalab/UniMERNet). A Universal Network for Real-World Mathematical Expression Recognition.
- [Donut](https://huggingface.co/naver-clova-ix/donut-base). UniMERNet's Transformer encoder-decoder is based on Donut.
- [Nougat](https://github.com/facebookresearch/nougat). The tokenizer is adapted from Nougat.
- [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM). Model inference acceleration uses TensorRT-LLM.
## License
StructEqTable is released under the [Apache License 2.0](LICENSE)
## Citation
If you find our models / code / papers useful in your research, please consider giving ⭐ and citations 📝, thx :)
```bibtex
@article{xia2024docgenome,
title={DocGenome: An Open Large-scale Scientific Document Benchmark for Training and Testing Multi-modal Large Language Models},
author={Xia, Renqiu and Mao, Song and Yan, Xiangchao and Zhou, Hongbin and Zhang, Bo and Peng, Haoyang and Pi, Jiahao and Fu, Daocheng and Wu, Wenjie and Ye, Hancheng and others},
journal={arXiv preprint arXiv:2406.11633},
year={2024}
}
```
## Contact Us
If you encounter any issues or have questions, please feel free to contact us via zhouhongbin@pjlab.org.cn.
|
lamm-mit/mistralai-Mistral-7B-v0.3
|
lamm-mit
| 2024-08-30T14:51:53Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-08-12T22:41:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BeaverAI/Theia-21B-v2d-GGUF
|
BeaverAI
| 2024-08-30T14:46:44Z | 11 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-08-30T11:45:26Z |
ChatML seems broken; text completion looks fine.
|
AshtonLKY/Whisper_ASR_ATC_v6
|
AshtonLKY
| 2024-08-30T14:45:46Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"asr-fyp",
"generated_from_trainer",
"en",
"dataset:AshtonLKY/Whisper_ASR_ATC",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-08-23T10:53:01Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- asr-fyp
- generated_from_trainer
datasets:
- AshtonLKY/Whisper_ASR_ATC
model-index:
- name: Whisper_ASR_ATC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper_ASR_ATC
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the AshtonLKY/augmented_audio dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.091453
- eval_wer: 6.122661
- step: 11000
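A minimal transcription sketch is shown below (assuming the checkpoint includes the processor files; `atc_sample.wav` is a hypothetical local air-traffic-control recording):
```python
from transformers import pipeline

# Sketch only: load the fine-tuned checkpoint for inference.
asr = pipeline(
    "automatic-speech-recognition",
    model="AshtonLKY/Whisper_ASR_ATC_v6",
)
# "atc_sample.wav" is a placeholder path for an ATC recording.
result = asr("atc_sample.wav")
print(result["text"])
```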
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 25000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
CharlesLi/OpenELM-1_1B-DPO-full-2-5
|
CharlesLi
| 2024-08-30T14:39:22Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"openelm",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"custom_code",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-08-30T08:25:51Z |
---
library_name: transformers
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: OpenELM-1_1B-DPO-full-2-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OpenELM-1_1B-DPO-full-2-5
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1888
- Rewards/chosen: -13.5625
- Rewards/rejected: -17.0
- Rewards/accuracies: 0.7070
- Rewards/margins: 3.4062
- Logps/rejected: -1984.0
- Logps/chosen: -1672.0
- Logits/rejected: 6.2188
- Logits/chosen: 4.5312
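A minimal generation sketch is shown below (OpenELM requires `trust_remote_code=True`; this assumes the checkpoint ships its own tokenizer and chat template):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: assumes the repository includes a tokenizer with a chat template.
model_id = "CharlesLi/OpenELM-1_1B-DPO-full-2-5"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

messages = [{"role": "user", "content": "Summarize what DPO training does in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```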
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.615 | 0.1047 | 100 | 0.6275 | -0.7383 | -0.9961 | 0.6719 | 0.2578 | -386.0 | -388.0 | -9.625 | -9.8125 |
| 0.5897 | 0.2093 | 200 | 0.6029 | -1.6641 | -2.0938 | 0.6934 | 0.4336 | -496.0 | -480.0 | -9.375 | -9.75 |
| 0.6457 | 0.3140 | 300 | 0.5886 | -1.3828 | -1.8281 | 0.6895 | 0.4473 | -470.0 | -454.0 | -13.5625 | -13.75 |
| 0.6271 | 0.4186 | 400 | 0.5936 | -1.7031 | -2.25 | 0.6992 | 0.5430 | -510.0 | -484.0 | -8.4375 | -8.8125 |
| 0.5746 | 0.5233 | 500 | 0.5886 | -2.0156 | -2.5625 | 0.6816 | 0.5430 | -540.0 | -516.0 | -6.6562 | -7.4062 |
| 0.5484 | 0.6279 | 600 | 0.5710 | -3.9531 | -4.6875 | 0.6973 | 0.7422 | -756.0 | -708.0 | -5.75 | -6.4375 |
| 0.5747 | 0.7326 | 700 | 0.5820 | -2.75 | -3.4844 | 0.6953 | 0.7227 | -632.0 | -592.0 | -6.5312 | -7.5938 |
| 0.5591 | 0.8373 | 800 | 0.5662 | -2.8594 | -3.5156 | 0.7090 | 0.6523 | -636.0 | -600.0 | -3.375 | -4.7812 |
| 0.5892 | 0.9419 | 900 | 0.5821 | -2.625 | -3.2344 | 0.7012 | 0.5977 | -608.0 | -576.0 | -4.8438 | -6.125 |
| 0.261 | 1.0466 | 1000 | 0.5852 | -3.9375 | -4.9688 | 0.7324 | 1.0078 | -780.0 | -708.0 | -0.8672 | -2.1094 |
| 0.2407 | 1.1512 | 1100 | 0.5943 | -4.0625 | -5.0 | 0.6895 | 0.9336 | -784.0 | -720.0 | -0.3672 | -1.9688 |
| 0.2348 | 1.2559 | 1200 | 0.6151 | -4.9375 | -5.9688 | 0.6777 | 1.0547 | -884.0 | -808.0 | 1.5312 | 0.2227 |
| 0.257 | 1.3605 | 1300 | 0.6005 | -4.4688 | -5.4688 | 0.6973 | 0.9883 | -832.0 | -760.0 | 1.5312 | -0.1445 |
| 0.2416 | 1.4652 | 1400 | 0.6023 | -5.1875 | -6.125 | 0.6855 | 0.9258 | -900.0 | -836.0 | 1.9141 | 0.2715 |
| 0.215 | 1.5699 | 1500 | 0.6062 | -5.5938 | -6.7188 | 0.6934 | 1.1328 | -960.0 | -872.0 | 1.9219 | 0.2637 |
| 0.2534 | 1.6745 | 1600 | 0.6013 | -4.6562 | -5.7188 | 0.7129 | 1.0391 | -856.0 | -780.0 | 2.7969 | 1.1406 |
| 0.2463 | 1.7792 | 1700 | 0.6173 | -5.2812 | -6.4375 | 0.6914 | 1.1484 | -928.0 | -844.0 | 1.9688 | 0.0977 |
| 0.23 | 1.8838 | 1800 | 0.6153 | -5.8438 | -7.0625 | 0.7090 | 1.2266 | -992.0 | -896.0 | 2.9062 | 1.0156 |
| 0.2092 | 1.9885 | 1900 | 0.6082 | -5.5625 | -6.7188 | 0.7051 | 1.1641 | -956.0 | -868.0 | 2.9375 | 1.0781 |
| 0.0271 | 2.0931 | 2000 | 0.7202 | -7.625 | -9.375 | 0.7207 | 1.7734 | -1224.0 | -1080.0 | 3.5781 | 1.8516 |
| 0.0367 | 2.1978 | 2100 | 0.8323 | -9.3125 | -11.5 | 0.7168 | 2.1406 | -1432.0 | -1248.0 | 4.7188 | 2.9219 |
| 0.0443 | 2.3025 | 2200 | 0.7840 | -8.0 | -10.0625 | 0.7324 | 2.0625 | -1296.0 | -1112.0 | 3.9375 | 2.0312 |
| 0.0302 | 2.4071 | 2300 | 0.7981 | -8.375 | -10.375 | 0.7070 | 2.0 | -1328.0 | -1152.0 | 4.625 | 2.8125 |
| 0.031 | 2.5118 | 2400 | 0.7786 | -7.9062 | -9.875 | 0.7129 | 1.9922 | -1280.0 | -1104.0 | 4.875 | 3.0156 |
| 0.018 | 2.6164 | 2500 | 0.8584 | -9.9375 | -12.125 | 0.6914 | 2.2031 | -1496.0 | -1312.0 | 5.4688 | 3.6719 |
| 0.0248 | 2.7211 | 2600 | 0.8079 | -8.625 | -10.6875 | 0.7012 | 2.0469 | -1352.0 | -1176.0 | 5.0312 | 3.0938 |
| 0.0263 | 2.8257 | 2700 | 0.8371 | -9.3125 | -11.375 | 0.6914 | 2.0156 | -1424.0 | -1248.0 | 5.2812 | 3.4531 |
| 0.033 | 2.9304 | 2800 | 0.8799 | -9.8125 | -12.1875 | 0.7207 | 2.4062 | -1504.0 | -1296.0 | 5.2188 | 3.3281 |
| 0.0118 | 3.0351 | 2900 | 0.8372 | -9.625 | -11.875 | 0.7246 | 2.2969 | -1472.0 | -1280.0 | 5.6562 | 3.7812 |
| 0.0094 | 3.1397 | 3000 | 0.9555 | -11.0 | -13.6875 | 0.7090 | 2.6875 | -1656.0 | -1416.0 | 6.0938 | 4.3125 |
| 0.0073 | 3.2444 | 3100 | 0.9687 | -11.375 | -14.125 | 0.7129 | 2.7344 | -1696.0 | -1456.0 | 5.9062 | 4.1875 |
| 0.0104 | 3.3490 | 3200 | 1.0111 | -11.75 | -14.5625 | 0.7070 | 2.8438 | -1744.0 | -1488.0 | 6.1875 | 4.4688 |
| 0.01 | 3.4537 | 3300 | 1.0564 | -12.125 | -15.0625 | 0.7051 | 2.9375 | -1792.0 | -1528.0 | 5.9375 | 4.2188 |
| 0.0089 | 3.5583 | 3400 | 0.9822 | -11.375 | -14.0625 | 0.7051 | 2.7031 | -1696.0 | -1448.0 | 5.875 | 4.2188 |
| 0.0106 | 3.6630 | 3500 | 1.0239 | -11.5625 | -14.375 | 0.7070 | 2.8125 | -1720.0 | -1472.0 | 5.9688 | 4.25 |
| 0.0099 | 3.7677 | 3600 | 1.0668 | -11.9375 | -14.9375 | 0.6973 | 3.0 | -1784.0 | -1512.0 | 6.125 | 4.375 |
| 0.0066 | 3.8723 | 3700 | 1.0938 | -12.75 | -15.875 | 0.7070 | 3.1406 | -1872.0 | -1592.0 | 6.2188 | 4.5312 |
| 0.0081 | 3.9770 | 3800 | 1.0255 | -11.6875 | -14.5625 | 0.7129 | 2.8906 | -1744.0 | -1488.0 | 5.9688 | 4.2812 |
| 0.0035 | 4.0816 | 3900 | 1.1112 | -12.75 | -15.875 | 0.7031 | 3.1406 | -1872.0 | -1592.0 | 6.2188 | 4.5312 |
| 0.002 | 4.1863 | 4000 | 1.1127 | -12.8125 | -16.0 | 0.7051 | 3.1562 | -1888.0 | -1600.0 | 6.1875 | 4.5 |
| 0.0036 | 4.2909 | 4100 | 1.1368 | -13.0 | -16.25 | 0.7031 | 3.25 | -1912.0 | -1616.0 | 6.1875 | 4.4688 |
| 0.0069 | 4.3956 | 4200 | 1.1589 | -13.25 | -16.625 | 0.7070 | 3.3125 | -1944.0 | -1640.0 | 6.2188 | 4.5312 |
| 0.0043 | 4.5003 | 4300 | 1.1756 | -13.4375 | -16.75 | 0.7031 | 3.375 | -1968.0 | -1656.0 | 6.2188 | 4.5312 |
| 0.0091 | 4.6049 | 4400 | 1.1842 | -13.5 | -16.875 | 0.7031 | 3.3906 | -1976.0 | -1664.0 | 6.2188 | 4.5312 |
| 0.0058 | 4.7096 | 4500 | 1.1865 | -13.5 | -16.875 | 0.7051 | 3.3906 | -1976.0 | -1664.0 | 6.2188 | 4.5312 |
| 0.0034 | 4.8142 | 4600 | 1.1880 | -13.5625 | -17.0 | 0.7051 | 3.3906 | -1984.0 | -1672.0 | 6.2188 | 4.5312 |
| 0.006 | 4.9189 | 4700 | 1.1888 | -13.5625 | -17.0 | 0.7070 | 3.4062 | -1984.0 | -1672.0 | 6.2188 | 4.5312 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1
|
John6666/pony-kanoato-symphony-final-sdxl
|
John6666
| 2024-08-30T14:34:56Z | 116 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"style",
"semi-realistic",
"illustration",
"paintings",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-08-30T14:30:14Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- style
- semi-realistic
- illustration
- paintings
- pony
---
The original model is [here](https://civitai.com/models/645642?modelVersionId=722292).
This model was created by [Kanoato](https://civitai.com/user/Kanoato).
|
QuantFactory/Bielik-11B-v2.2-Instruct-GGUF
|
QuantFactory
| 2024-08-30T14:26:16Z | 18 | 1 |
transformers
|
[
"transformers",
"gguf",
"finetuned",
"pl",
"arxiv:2005.01643",
"arxiv:2309.11235",
"arxiv:2006.09092",
"arxiv:2402.13228",
"base_model:speakleash/Bielik-11B-v2",
"base_model:quantized:speakleash/Bielik-11B-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-30T13:19:21Z |
---
license: apache-2.0
base_model: speakleash/Bielik-11B-v2
language:
- pl
library_name: transformers
tags:
- finetuned
inference:
parameters:
temperature: 0.2
widget:
- messages:
- role: user
content: Co przedstawia polskie godło?
extra_gated_description: If you want to learn more about how you can use the model, please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>.
---

# QuantFactory/Bielik-11B-v2.2-Instruct-GGUF
This is a quantized version of [speakleash/Bielik-11B-v2.2-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct) created using llama.cpp.
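A minimal llama-cpp-python sketch for running one of the GGUF files locally is shown below; the filename is a hypothetical example, so use whichever quantization you actually downloaded from this repository.
```python
from llama_cpp import Llama

# Sketch only: "Bielik-11B-v2.2-Instruct.Q4_K_M.gguf" is a hypothetical local filename;
# replace it with the GGUF file you downloaded from this repository.
llm = Llama(model_path="Bielik-11B-v2.2-Instruct.Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Odpowiadaj krótko i wyłącznie w języku polskim."},
        {"role": "user", "content": "Jakie mamy pory roku w Polsce?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```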
# Original Model Card
<p align="center">
<img src="https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct/raw/main/speakleash_cyfronet.png">
</p>
# Bielik-11B-v2.2-Instruct
Bielik-11B-v2.2-Instruct is a generative text model featuring 11 billion parameters.
It is an instruct fine-tuned version of the [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2).
The aforementioned model stands as a testament to the unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center: ACK Cyfronet AGH.
Developed and trained on Polish text corpora, which has been cherry-picked and processed by the SpeakLeash team, this endeavor leverages Polish large-scale computing infrastructure,
specifically within the PLGrid environment, and more precisely, the HPC centers: ACK Cyfronet AGH.
The creation and training of the Bielik-11B-v2.2-Instruct was propelled by the support of computational grant number PLG/2024/016951, conducted on the Athena and Helios supercomputers,
enabling the use of cutting-edge technology and computational resources essential for large-scale machine learning processes.
As a result, the model exhibits an exceptional ability to understand and process the Polish language, providing accurate responses and performing a variety of linguistic tasks with high precision.
🎥 Demo: https://chat.bielik.ai
🗣️ Chat Arena<span style="color:red;">*</span>: https://arena.speakleash.org.pl/
<span style="color:red;">*</span>Chat Arena is a platform for testing and comparing different AI language models, allowing users to evaluate their performance and quality.
## Model
The [SpeakLeash](https://speakleash.org/) team is working on their own set of instructions in Polish, which is continuously being expanded and refined by annotators. A portion of these instructions, which had been manually verified and corrected, has been utilized for training purposes. Moreover, due to the limited availability of high-quality instructions in Polish, synthetic instructions were generated with [Mixtral 8x22B](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) and used in training. The dataset used for training comprised over 20 million instructions, consisting of more than 10 billion tokens. The instructions varied in quality, leading to a deterioration in the model’s performance. To counteract this while still allowing ourselves to utilize the aforementioned datasets, several improvements were introduced:
* Weighted tokens level loss - a strategy inspired by [offline reinforcement learning](https://arxiv.org/abs/2005.01643) and [C-RLFT](https://arxiv.org/abs/2309.11235)
* Adaptive learning rate inspired by the study on [Learning Rates as a Function of Batch Size](https://arxiv.org/abs/2006.09092)
* Masked prompt tokens
To align the model with user preferences we tested many different techniques: DPO, PPO, KTO, SiMPO. Finally, the [DPO-Positive](https://arxiv.org/abs/2402.13228) method was employed, utilizing both generated and manually corrected examples, which were scored by a metamodel. A dataset comprising over 66,000 examples of varying lengths was used to address different aspects of response style. It was filtered and evaluated by the reward model to select instructions with the right level of difference between chosen and rejected responses. The novelty introduced in DPO-P was the introduction of multi-turn conversations.
Bielik-11B-v2.2-Instruct has been trained with the use of an original open-source framework called [ALLaMo](https://github.com/chrisociepa/allamo) implemented by [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/). This framework allows users to train language models with an architecture similar to LLaMA and Mistral in a fast and efficient way.
### Model description:
* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)
* **Model ref:** speakleash:0deb975c3780df3a3ae98b619185faa1
### Quantized models:
We know that some people want to explore smaller models or don't have the resources to run a full model. Therefore, we have prepared quantized versions of the Bielik-11B-v2.2-Instruct model in separate repositories:
- [GGUF - Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-GGUF)
- [GPTQ - 4bit](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-GPTQ)
- HQQ - [4bit](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-HQQ-4bit-128gs), [8bit](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-HQQ-8bit-128gs)
- [AWQ - 4bit GEMM](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-AWQ)
- EXL2 - [4.5bit](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-EXL2-4.5bit), [6.5bit](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-EXL2-6.5bit)
- MLX - [4bit](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-MLX-4bit), [8bit](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-MLX-8bit)
- Quanto - [4bit](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-Quanto-4bit), [8bit](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-Quanto-8bit)
- [FP8](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-FP8) (vLLM, SGLang - Ada Lovelace, Hopper optimized)
- [INT8 W8A8](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-W8A8) (vLLM INT8 quantization Weights=8bits and Activations=8bits)
- [GGUF - experimental - IQ imatrix IQ2_XXS, IQ3_XXS, IQ4_XS and calibrated Q4_K_M, Q5_K_M, Q6_K, Q8_0](https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct-GGUF-IQ-Imatrix)
Please note that quantized models may offer lower quality of generated answers compared to the full-sized variants.
### Chat template
Bielik-11B-v2.2-Instruct uses [ChatML](https://github.com/cognitivecomputations/OpenChatML) as the prompt format.
E.g.
```
prompt = "<s><|im_start|> user\nJakie mamy pory roku?<|im_end|> \n<|im_start|> assistant\n"
completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|> \n"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model_name = "speakleash/Bielik-11B-v2.2-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
messages = [
{"role": "system", "content": "Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim."},
{"role": "user", "content": "Jakie mamy pory roku w Polsce?"},
{"role": "assistant", "content": "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."},
{"role": "user", "content": "Która jest najcieplejsza?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = input_ids.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
Fully formatted input conversation produced by apply_chat_template from the previous example:
```
<s><|im_start|> system
Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim.<|im_end|>
<|im_start|> user
Jakie mamy pory roku w Polsce?<|im_end|>
<|im_start|> assistant
W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|>
<|im_start|> user
Która jest najcieplejsza?<|im_end|>
```
## Evaluation
Bielik-11B-v2.2-Instruct has been evaluated on several benchmarks to assess its performance across various tasks and languages. These benchmarks include:
1. Open PL LLM Leaderboard
2. Open LLM Leaderboard
3. Polish MT-Bench
4. Polish EQ-Bench (Emotional Intelligence Benchmark)
5. MixEval
The following sections provide detailed results for each of these benchmarks, demonstrating the model's capabilities in both Polish and English language tasks.
### Open PL LLM Leaderboard
Models have been evaluated on [Open PL LLM Leaderboard](https://huggingface.co/spaces/speakleash/open_pl_llm_leaderboard) 5-shot. The benchmark evaluates models in NLP tasks like sentiment analysis, categorization, text classification but does not test chatting skills. Average column is an average score among all tasks normalized by baseline scores.
| Model | Parameters (B)| Average |
|---------------------------------|------------|---------|
| Meta-Llama-3.1-405B-Instruct-FP8,API | 405 | 69.44 |
| Mistral-Large-Instruct-2407 | 123 | 69.11 |
| Qwen2-72B-Instruct | 72 | 65.87 |
| **Bielik-11B-v2.2-Instruct** | **11** | **65.57** |
| Meta-Llama-3.1-70B-Instruct | 70 | 65.49 |
| Bielik-11B-v2.1-Instruct | 11 | 65.45 |
| Mixtral-8x22B-Instruct-v0.1 | 141 | 65.23 |
| Bielik-11B-v2.0-Instruct | 11 | 64.98 |
| Meta-Llama-3-70B-Instruct | 70 | 64.45 |
| Athene-70B | 70 | 63.65 |
| WizardLM-2-8x22B | 141 | 62.35 |
| Qwen1.5-72B-Chat | 72 | 58.67 |
| Qwen2-57B-A14B-Instruct | 57 | 56.89 |
| glm-4-9b-chat | 9 | 56.61 |
| aya-23-35B | 35 | 56.37 |
| Phi-3.5-MoE-instruct | 41.9 | 56.34 |
| openchat-3.5-0106-gemma | 7 | 55.69 |
| Mistral-Nemo-Instruct-2407 | 12 | 55.27 |
| SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.24 |
| Mixtral-8x7B-Instruct-v0.1 | 46.7 | 55.07 |
| Bielik-7B-Instruct-v0.1 | 7 | 44.70 |
| trurl-2-13b-academic | 13 | 36.28 |
| trurl-2-7b | 7 | 26.93 |
The results from the Open PL LLM Leaderboard demonstrate the exceptional performance of Bielik-11B-v2.2-Instruct:
1. Superior performance in its class: Bielik-11B-v2.2-Instruct outperforms all other models with less than 70B parameters. This is a significant achievement, showcasing its efficiency and effectiveness despite having fewer parameters than many competitors.
2. Competitive with larger models: with a score of 65.57, Bielik-11B-v2.2-Instruct performs on par with models in the 70B parameter range. This indicates that it achieves comparable results to much larger models, demonstrating its advanced architecture and training methodology.
3. Substantial improvement over previous version: the model shows a marked improvement over its predecessor, Bielik-7B-Instruct-v0.1, which scored 43.64. This leap in performance highlights the successful enhancements and optimizations implemented in this newer version.
4. Leading position for Polish language models: in the context of Polish language models, Bielik-11B-v2.2-Instruct stands out as a leader. There are no other competitive models specifically tailored for the Polish language that match its performance, making it a crucial resource for Polish NLP tasks.
These results underscore Bielik-11B-v2.2-Instruct's position as a state-of-the-art model for Polish language processing, offering high performance with relatively modest computational requirements.
#### Open PL LLM Leaderboard - Generative Tasks Performance
This section presents a focused comparison of generative Polish language task performance between Bielik models and GPT-3.5. The evaluation is limited to generative tasks due to the constraints of assessing OpenAI models. The comprehensive nature and associated costs of the benchmark explain the limited number of models evaluated.
| Model | Parameters (B) | Average g |
|-------------------------------|----------------|---------------|
| Bielik-11B-v2.1-Instruct | 11 | 66.58 |
| **Bielik-11B-v2.2-Instruct** | 11 | **66.11** |
| Bielik-11B-v2.0-Instruct | 11 | 65.58 |
| gpt-3.5-turbo-instruct | Unknown | 55.65 |
The performance variation among Bielik versions is minimal, indicating consistent quality across iterations. Bielik-11B-v2.2-Instruct demonstrates an impressive 18.8% performance advantage over GPT-3.5.
### Open LLM Leaderboard
The Open LLM Leaderboard evaluates models on various English language tasks, providing insights into the model's performance across different linguistic challenges.
| Model | AVG | arc_challenge | hellaswag | truthfulqa_mc2 | mmlu | winogrande | gsm8k |
|--------------------------|-------|---------------|-----------|----------------|-------|------------|-------|
| **Bielik-11B-v2.2-Instruct** | **69.86** | 59.90 | 80.16 | 58.34 | 64.34 | 75.30 | 81.12 |
| Bielik-11B-v2.1-Instruct | 69.82 | 59.56 | 80.20 | 59.35 | 64.18 | 75.06 | 80.59 |
| Bielik-11B-v2.0-Instruct | 68.04 | 58.62 | 78.65 | 54.65 | 63.71 | 76.32 | 76.27 |
| Bielik-11B-v2 | 65.87 | 60.58 | 79.84 | 46.13 | 63.06 | 77.82 | 67.78 |
| Mistral-7B-Instruct-v0.2 | 65.71 | 63.14 | 84.88 | 68.26 | 60.78 | 77.19 | 40.03 |
| Bielik-7B-Instruct-v0.1 | 51.26 | 47.53 | 68.91 | 49.47 | 46.18 | 65.51 | 29.95 |
Bielik-11B-v2.2-Instruct shows impressive performance on English language tasks:
1. Significant improvement over its base model (4-point increase).
2. Substantial 18-point improvement over Bielik-7B-Instruct-v0.1.
These results demonstrate Bielik-11B-v2.2-Instruct's versatility in both Polish and English, highlighting the effectiveness of its instruction tuning process.
### Polish MT-Bench
The Bielik-11B-v2.2-Instruct (16-bit) model was also evaluated using the MT-Bench benchmark. The quality of the model was assessed using the English version (the original version without modifications) and the Polish version created by SpeakLeash (tasks and evaluation in Polish; the content of the tasks was also adapted to the context of the Polish language).
#### MT-Bench English
| Model | Score |
|-----------------|----------|
| Bielik-11B-v2.1 | 8.537500 |
| **Bielik-11B-v2.2** | **8.390625** |
| Bielik-11B-v2.0 | 8.159375 |
#### MT-Bench Polish
| Model | Parameters (B) | Score |
|-------------------------------------|----------------|----------|
| Qwen2-72B-Instruct | 72 | 8.775000 |
| Mistral-Large-Instruct-2407 (123B) | 123 | 8.662500 |
| gemma-2-27b-it | 27 | 8.618750 |
| Mixtral-8x22b | 141 | 8.231250 |
| Meta-Llama-3.1-405B-Instruct | 405 | 8.168750 |
| Meta-Llama-3.1-70B-Instruct | 70 | 8.150000 |
| **Bielik-11B-v2.2-Instruct** | **11** | **8.115625** |
| Bielik-11B-v2.1-Instruct | 11 | 7.996875 |
| gpt-3.5-turbo | Unknown | 7.868750 |
| Mixtral-8x7b | 46.7 | 7.637500 |
| Bielik-11B-v2.0-Instruct | 11 | 7.562500 |
| Mistral-Nemo-Instruct-2407 | 12 | 7.368750 |
| openchat-3.5-0106-gemma | 7 | 6.812500 |
| Mistral-7B-Instruct-v0.2 | 7 | 6.556250 |
| Meta-Llama-3.1-8B-Instruct | 8 | 6.556250 |
| Bielik-7B-Instruct-v0.1 | 7 | 6.081250 |
| Mistral-7B-Instruct-v0.3 | 7 | 5.818750 |
| Polka-Mistral-7B-SFT | 7 | 4.518750 |
| trurl-2-7b | 7 | 2.762500 |
Key observations on Bielik-11B-v2.2 performance:
1. Strong performance among mid-sized models: Bielik-11B-v2.2-Instruct scored **8.115625**, placing it ahead of several well-known models like GPT-3.5-turbo (7.868750) and Mixtral-8x7b (7.637500). This indicates that Bielik-11B-v2.2 is competitive among mid-sized models, particularly those in the 11B-70B parameter range.
2. Competitive against larger models: Bielik-11B-v2.2-Instruct performs close to Meta-Llama-3.1-70B-Instruct (8.150000), Meta-Llama-3.1-405B-Instruct (8.168750) and even Mixtral-8x22b (8.231250), which have significantly more parameters. This efficiency in performance relative to size could make it an attractive option for tasks where resource constraints are a consideration. Bielik generated 100% of its answers in Polish, while other models (not typically trained for Polish) may answer Polish questions in English.
3. Significant improvement over previous versions: compared to its predecessor, **Bielik-7B-Instruct-v0.1**, which scored **6.081250**, the Bielik-11B-v2.2-Instruct shows a significant improvement. The score increased by more than **2 points**, highlighting substantial advancements in model quality, optimization and training methodology.
For more information - answers to test tasks and values in each category, visit the [MT-Bench PL](https://huggingface.co/spaces/speakleash/mt-bench-pl) website.
### Polish EQ-Bench
[Polish Emotional Intelligence Benchmark for LLMs](https://huggingface.co/spaces/speakleash/polish_eq-bench)
| Model | Parameters (B) | Score |
|-------------------------------|--------|-------|
| Mistral-Large-Instruct-2407 | 123 | 78.07 |
| Meta-Llama-3.1-405B-Instruct-FP8 | 405 | 77.23 |
| gpt-4o-2024-08-06 | ? | 75.15 |
| gpt-4-turbo-2024-04-09 | ? | 74.59 |
| Meta-Llama-3.1-70B-Instruct | 70 | 72.53 |
| Qwen2-72B-Instruct | 72 | 71.23 |
| Meta-Llama-3-70B-Instruct | 70 | 71.21 |
| gpt-4o-mini-2024-07-18 | ? | 71.15 |
| WizardLM-2-8x22B | 141 | 69.56 |
| **Bielik-11B-v2.2-Instruct** | **11** | **69.05** |
| Bielik-11B-v2.0-Instruct | 11 | 68.24 |
| Qwen1.5-72B-Chat | 72 | 68.03 |
| Mixtral-8x22B-Instruct-v0.1 | 141 | 67.63 |
| Bielik-11B-v2.1-Instruct | 11 | 60.07 |
| Qwen1.5-32B-Chat | 32 | 59.63 |
| openchat-3.5-0106-gemma | 7 | 59.58 |
| aya-23-35B | 35 | 58.41 |
| gpt-3.5-turbo | ? | 57.7 |
| Qwen2-57B-A14B-Instruct | 57 | 57.64 |
| Mixtral-8x7B-Instruct-v0.1 | 47 | 57.61 |
| SOLAR-10.7B-Instruct-v1.0 | 10.7 | 55.21 |
| Mistral-7B-Instruct-v0.2 | 7 | 47.02 |
The results show that Bielik-11B-v2.2-Instruct is the best performing model among those with less than 70B parameters. With a score of 69.05, it outperforms larger models like Qwen1.5-72B-Chat and Mixtral-8x22B-Instruct-v0.1, demonstrating its exceptional efficiency and effectiveness despite its smaller parameter count.
### MixEval
MixEval is a ground-truth-based English benchmark designed to evaluate Large Language Models (LLMs) efficiently and effectively. Key features of MixEval include:
1. Derived from off-the-shelf benchmark mixtures
2. Highly capable model ranking with a 0.96 correlation to Chatbot Arena
3. Local and quick execution, requiring only 6% of the time and cost compared to running MMLU
This benchmark provides a robust and time-efficient method for assessing LLM performance, making it a valuable tool for ongoing model evaluation and comparison.
| Model | MixEval | MixEval-Hard |
|-------------------------------|---------|--------------|
| Bielik-11B-v2.1-Instruct | 74.55 | 45.00 |
| **Bielik-11B-v2.2-Instruct** | 72.35 | 39.65 |
| Bielik-11B-v2.0-Instruct | 72.10 | 40.20 |
| Mistral-7B-Instruct-v0.2 | 70.00 | 36.20 |
The results show that Bielik-11B-v2.2-Instruct performs well on the MixEval benchmark, achieving a score of 72.35 on the standard MixEval and 39.65 on MixEval-Hard. Notably, Bielik-11B-v2.2-Instruct significantly outperforms Mistral-7B-Instruct-v0.2 on both metrics, demonstrating its improved capabilities despite being based on a similar architecture.
### Chat Arena PL
Chat Arena PL is a human-evaluated benchmark that provides a direct comparison of model performance through head-to-head battles. Unlike the automated benchmarks mentioned above, this evaluation relies on human judgment to assess the quality and effectiveness of model responses. The results offer valuable insights into how different models perform in real-world, conversational scenarios as perceived by human evaluators.
Results accessed on 2024-08-26.
| # | Model | Battles | Won | Lost | Draws | Win % | ELO |
|---|-------|-------|---------|-----------|--------|-------------|-----|
| 1 | **Bielik-11B-v2.2-Instruct** | 92 | 72 | 14 | 6 | **83.72%** | 1234 |
| 2 | Bielik-11B-v2.1-Instruct | 240 | 171 | 50 | 19 | 77.38% | 1174 |
| 3 | gpt-4o-mini | 639 | 402 | 117 | 120 | 77.46% | 1141 |
| 4 | Mistral Large 2 (2024-07) | 324 | 188 | 69 | 67 | 73.15% | 1125 |
| 5 | Llama-3.1-405B | 548 | 297 | 144 | 107 | 67.35% | 1090 |
| 6 | Bielik-11B-v2.0-Instruct | 1289 | 695 | 352 | 242 | 66.38% | 1059 |
| 7 | Llama-3.1-70B | 498 | 221 | 187 | 90 | 54.17% | 1033 |
| 8 | Bielik-1-7B | 2041 | 1029 | 638 | 374 | 61.73% | 1020 |
| 9 | Mixtral-8x22B-v0.1 | 432 | 166 | 167 | 99 | 49.85% | 1018 |
| 10 | Qwen2-72B | 451 | 179 | 177 | 95 | 50.28% | 1011 |
| 11 | gpt-3.5-turbo | 2186 | 1007 | 731 | 448 | 57.94% | 1008 |
| 12 | Llama-3.1-8B | 440 | 155 | 227 | 58 | 40.58% | 975 |
| 13 | Mixtral-8x7B-v0.1 | 1997 | 794 | 804 | 399 | 49.69% | 973 |
| 14 | Llama-3-70b | 2008 | 733 | 909 | 366 | 44.64% | 956 |
| 15 | Mistral Nemo (2024-07) | 301 | 84 | 164 | 53 | 33.87% | 954 |
| 16 | Llama-3-8b | 1911 | 473 | 1091 | 347 | 30.24% | 909 |
| 17 | gemma-7b-it | 1928 | 418 | 1221 | 289 | 25.5% | 888 |
The results show that Bielik-11B-v2.2-Instruct outperforms all other models in this benchmark, achieving the highest win percentage (83.72%) and ELO score (1234). This impressive performance demonstrates its effectiveness in real-world conversational scenarios, as judged by human evaluators.
## Limitations and Biases
Bielik-11B-v2.2-Instruct is a quick demonstration that the base model can be easily fine-tuned to achieve compelling and promising performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community in ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs.
Bielik-11B-v2.2-Instruct can produce factually incorrect output, and should not be relied on to produce factually accurate data. Bielik-11B-v2.2-Instruct was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.
## Citation
Please cite this model using the following format:
```
@misc{Bielik11Bv2i,
title = {Bielik-11B-v2.2-Instruct model card},
author = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof and {SpeakLeash Team} and {Cyfronet Team}},
year = {2024},
url = {https://huggingface.co/speakleash/Bielik-11B-v2.2-Instruct},
note = {Accessed: 2024-08-28}, % change this date
urldate = {2024-08-28} % change this date
}
@unpublished{Bielik11Bv2a,
author = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof},
title = {Bielik: A Family of Large Language Models for the Polish Language - Development, Insights, and Evaluation},
year = {2024},
}
```
## Responsible for training the model
* [Krzysztof Ociepa](https://www.linkedin.com/in/krzysztof-ociepa-44886550/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, data preparation, process optimization and oversight of training
* [Łukasz Flis](https://www.linkedin.com/in/lukasz-flis-0a39631/)<sup>Cyfronet AGH</sup> - coordinating and supervising the training
* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - conceptualizing and coordinating DPO training, data preparation
* [Adrian Gwoździej](https://www.linkedin.com/in/adrgwo/)<sup>SpeakLeash</sup> - data preparation and ensuring data quality
* [Krzysztof Wróbel](https://www.linkedin.com/in/wrobelkrzysztof/)<sup>SpeakLeash</sup> - benchmarks
The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. Individuals who contributed to the creation of the model:
[Sebastian Kondracki](https://www.linkedin.com/in/sebastian-kondracki/),
[Igor Ciuciura](https://www.linkedin.com/in/igor-ciuciura-1763b52a6/),
[Paweł Kiszczak](https://www.linkedin.com/in/paveu-kiszczak/),
[Szymon Baczyński](https://www.linkedin.com/in/szymon-baczynski/),
[Jacek Chwiła](https://www.linkedin.com/in/jacek-chwila/),
[Maria Filipkowska](https://www.linkedin.com/in/maria-filipkowska/),
[Jan Maria Kowalski](https://www.linkedin.com/in/janmariakowalski/),
[Karol Jezierski](https://www.linkedin.com/in/karol-jezierski/),
[Kacper Milan](https://www.linkedin.com/in/kacper-milan/),
[Jan Sowa](https://www.linkedin.com/in/janpiotrsowa/),
[Len Krawczyk](https://www.linkedin.com/in/magdalena-krawczyk-7810942ab/),
[Marta Seidler](https://www.linkedin.com/in/marta-seidler-751102259/),
[Agnieszka Ratajska](https://www.linkedin.com/in/agnieszka-ratajska/),
[Krzysztof Koziarek](https://www.linkedin.com/in/krzysztofkoziarek/),
[Szymon Pepliński](http://linkedin.com/in/szymonpeplinski/),
[Filip Bogacz](https://linkedin.com/in/Fibogacci),
[Agnieszka Kosiak](https://www.linkedin.com/in/agn-kosiak),
[Izabela Babis](https://www.linkedin.com/in/izabela-babis-2274b8105/),
[Nina Babis](https://www.linkedin.com/in/nina-babis-00055a140/).
Members of the ACK Cyfronet AGH team providing valuable support and expertise:
[Szymon Mazurek](https://www.linkedin.com/in/sz-mazurek-ai/),
[Marek Magryś](https://www.linkedin.com/in/magrys/),
[Mieszko Cholewa ](https://www.linkedin.com/in/mieszko-cholewa-613726301/).
## Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.com/invite/TunEeCTw).
|
Kanon14/whisper-tiny-dv
|
Kanon14
| 2024-08-30T14:25:16Z | 105 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-08-30T13:58:37Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 31.936245572609206
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-dv
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6532
- Wer Ortho: 32.0173
- Wer: 31.9362
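For reference, the reported WER can be reproduced for any set of transcripts with the `evaluate` library; a minimal sketch with placeholder strings:
```python
import evaluate

# Sketch only: placeholder strings stand in for model transcripts and references.
wer_metric = evaluate.load("wer")
predictions = ["turn left heading two seven zero"]
references = ["turn left heading two seven zero"]
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```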
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:-------:|
| 0.0012 | 17.8571 | 500 | 0.6532 | 32.0173 | 31.9362 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
John6666/nsfw-master-flux-lora-merged-with-flux1-dev-fp16-v10-fp8-flux
|
John6666
| 2024-08-30T14:21:44Z | 1,166 | 5 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"Flux",
"fp8",
"float8_e4m3fn",
"realistic",
"photorealistic",
"en",
"license:other",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
] |
text-to-image
| 2024-08-30T14:06:55Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- Flux
- fp8
- float8_e4m3fn
- realistic
- photorealistic
---
The original model is [here](https://civitai.com/models/701671/nsfw-master-flux-lora-merged-with-flux1-dev-fp16?modelVersionId=785079).
This model was created by [Defozo](https://civitai.com/user/Defozo).
## Notice
This is an experimental conversion made in Spaces using a homebrew script. The serverless Inference API does not currently support torch float8_e4m3fn, so it does not work there.
I have not been able to confirm whether the conversion works properly.
Please consider this a test run only.
|
TonyStarkD99/CLIP-Crop_Disease
|
TonyStarkD99
| 2024-08-30T14:21:11Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-08-30T13:26:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf | RichardErkhov | 2024-08-30T14:20:34Z | 5 | 0 | null | ["gguf", "endpoints_compatible", "region:us"] | null | 2024-08-30T13:48:08Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
tinyllama_frankenmerge - GGUF
- Model creator: https://huggingface.co/bn22/
- Original model: https://huggingface.co/bn22/tinyllama_frankenmerge/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama_frankenmerge.Q2_K.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q2_K.gguf) | Q2_K | 0.55GB |
| [tinyllama_frankenmerge.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.IQ3_XS.gguf) | IQ3_XS | 0.61GB |
| [tinyllama_frankenmerge.IQ3_S.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.IQ3_S.gguf) | IQ3_S | 0.64GB |
| [tinyllama_frankenmerge.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q3_K_S.gguf) | Q3_K_S | 0.64GB |
| [tinyllama_frankenmerge.IQ3_M.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.IQ3_M.gguf) | IQ3_M | 0.67GB |
| [tinyllama_frankenmerge.Q3_K.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q3_K.gguf) | Q3_K | 0.71GB |
| [tinyllama_frankenmerge.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q3_K_M.gguf) | Q3_K_M | 0.71GB |
| [tinyllama_frankenmerge.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q3_K_L.gguf) | Q3_K_L | 0.77GB |
| [tinyllama_frankenmerge.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.IQ4_XS.gguf) | IQ4_XS | 0.79GB |
| [tinyllama_frankenmerge.Q4_0.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q4_0.gguf) | Q4_0 | 0.82GB |
| [tinyllama_frankenmerge.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.IQ4_NL.gguf) | IQ4_NL | 0.83GB |
| [tinyllama_frankenmerge.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q4_K_S.gguf) | Q4_K_S | 0.83GB |
| [tinyllama_frankenmerge.Q4_K.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q4_K.gguf) | Q4_K | 0.87GB |
| [tinyllama_frankenmerge.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q4_K_M.gguf) | Q4_K_M | 0.87GB |
| [tinyllama_frankenmerge.Q4_1.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q4_1.gguf) | Q4_1 | 0.91GB |
| [tinyllama_frankenmerge.Q5_0.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q5_0.gguf) | Q5_0 | 1.0GB |
| [tinyllama_frankenmerge.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q5_K_S.gguf) | Q5_K_S | 1.0GB |
| [tinyllama_frankenmerge.Q5_K.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q5_K.gguf) | Q5_K | 1.02GB |
| [tinyllama_frankenmerge.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q5_K_M.gguf) | Q5_K_M | 1.02GB |
| [tinyllama_frankenmerge.Q5_1.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q5_1.gguf) | Q5_1 | 1.08GB |
| [tinyllama_frankenmerge.Q6_K.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q6_K.gguf) | Q6_K | 1.18GB |
| [tinyllama_frankenmerge.Q8_0.gguf](https://huggingface.co/RichardErkhov/bn22_-_tinyllama_frankenmerge-gguf/blob/main/tinyllama_frankenmerge.Q8_0.gguf) | Q8_0 | 1.53GB |
Original model description:
---
license: apache-2.0
tags:
- merge
- mergekit
---
# tinyllama_frankenmerge
This model is a merge of the following models made with [mergekit](https://github.com/cg123/mergekit):
* [TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T)
* [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
## 🧩 Configuration
```yml
slices:
- sources:
- model: TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T
layer_range: [0, 16]
- sources:
- model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
layer_range: [6, 22]
merge_method: passthrough
dtype: float16
```
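For readers who want to try the merged model directly (rather than the GGUF quants above), here is a minimal, hedged loading sketch with 🤗 Transformers. It assumes the passthrough merge behaves like any other Llama-architecture causal LM; the prompt is illustrative.

```python
# Hedged sketch: loading the original (non-GGUF) merge with 🤗 Transformers.
# A passthrough merge of two TinyLlama checkpoints should behave like any Llama causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bn22/tinyllama_frankenmerge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("The merged model continues this sentence:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```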
|
WESTARBJUNIORSUPERO/qametrik_ai_llm_8b | WESTARBJUNIORSUPERO | 2024-08-30T14:13:19Z | 11 | 0 | transformers | ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-08-30T14:10:09Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** WESTARBJUNIORSUPERO
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
williamdeli/image_classification | williamdeli | 2024-08-30T14:06:50Z | 16 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-02-16T18:06:25Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: image_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# image_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5747
- Accuracy: 0.883
## Model description
More information needed
## Intended uses & limitations
More information needed
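Pending more details, a hedged inference sketch using the `image-classification` pipeline is shown below; the image path is illustrative and the label names depend on the (unspecified) fine-tuning dataset.

```python
# Hedged sketch: inference via the image-classification pipeline.
# The image path is illustrative; labels depend on the fine-tuning dataset.
from transformers import pipeline

classifier = pipeline("image-classification", model="williamdeli/image_classification")
print(classifier("example.jpg"))  # list of {"label": ..., "score": ...} dicts
```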
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
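As a hedged sketch, these values map onto 🤗 `TrainingArguments` roughly as follows; the `output_dir` name is illustrative, and the Adam betas/epsilon listed above are already the library defaults.

```python
# Hedged sketch: an approximate TrainingArguments configuration matching the values above.
# The output_dir name is illustrative; Adam betas/epsilon are the library defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="image_classification",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```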
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6533 | 0.992 | 62 | 2.4755 | 0.832 |
| 1.7798 | 2.0 | 125 | 1.7368 | 0.866 |
| 1.5615 | 2.976 | 186 | 1.5850 | 0.893 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
mradermacher/Athena-gemma-2-9b-it-i1-GGUF | mradermacher | 2024-08-30T14:04:08Z | 46 | 1 | transformers | ["transformers", "gguf", "text-generation-inference", "unsloth", "gemma2", "trl", "en", "base_model:EpistemeAI/Athena-gemma-2-9b-it", "base_model:quantized:EpistemeAI/Athena-gemma-2-9b-it", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational"] | null | 2024-08-30T12:38:23Z |
---
base_model: EpistemeAI/Athena-gemma-2-9b-it
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/EpistemeAI/Athena-gemma-2-9b-it
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
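As a minimal, hedged sketch (not an official recommendation), a single-file quant from the table below can be run locally with `llama-cpp-python`; the chosen file name and parameters are illustrative, and other runtimes (the llama.cpp CLI, ollama, etc.) work similarly.

```python
# Hedged sketch: running a single-file GGUF quant locally with llama-cpp-python.
# The file name (i1-Q4_K_M) and the parameters are illustrative, not a recommendation.
from llama_cpp import Llama

llm = Llama(
    model_path="Athena-gemma-2-9b-it.i1-Q4_K_M.gguf",  # downloaded from this repo
    n_ctx=4096,       # context length; adjust to available memory
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

result = llm("Summarize what imatrix quantization is in one sentence.", max_tokens=128)
print(result["choices"][0]["text"])
```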
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Athena-gemma-2-9b-it-i1-GGUF/resolve/main/Athena-gemma-2-9b-it.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Mohamed1213/donut-docvqa-finetuned-by-Captos | Mohamed1213 | 2024-08-30T13:53:23Z | 47 | 0 | transformers | ["transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | image-text-to-text | 2024-08-22T11:03:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
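No starter code is provided yet; the following is a hedged sketch that assumes the checkpoint keeps the standard Donut DocVQA interface (processor, task prompt, and generation loop) suggested by the `vision-encoder-decoder` and `image-text-to-text` tags. The image path and question are illustrative.

```python
# Hedged sketch: assumes this checkpoint keeps the standard Donut DocVQA interface.
# The image path and question are illustrative.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "Mohamed1213/donut-docvqa-finetuned-by-Captos"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("document.png").convert("RGB")
question = "What is the invoice number?"
task_prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"

pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=512,
        pad_token_id=processor.tokenizer.pad_token_id,
        eos_token_id=processor.tokenizer.eos_token_id,
    )

sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(sequence)  # processor.token2json(sequence) parses the answer into a dict
```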
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jvelja/BERT_gemma2b-instrumentalEmergence-strongerOversight_0 | jvelja | 2024-08-30T13:52:51Z | 117 | 0 | transformers | ["transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-08-29T17:00:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
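No starter code is provided yet; a hedged sketch assuming a standard DistilBERT sequence-classification checkpoint, as the repository tags suggest:

```python
# Hedged sketch: assumes a standard DistilBERT sequence-classification checkpoint.
# The input text is illustrative; label names depend on the (unspecified) training setup.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jvelja/BERT_gemma2b-instrumentalEmergence-strongerOversight_0",
)
print(classifier("Example input text to score."))
```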
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
edumorfosis/tokflux | edumorfosis | 2024-08-30T13:32:49Z | 5 | 0 | null | ["license:other", "region:us"] | null | 2024-08-30T12:57:54Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
mezzihoussem/CHatbot2epoch | mezzihoussem | 2024-08-30T13:25:12Z | 15 | 0 | transformers | ["transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-2-7b-chat-bnb-4bit", "base_model:quantized:unsloth/llama-2-7b-chat-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2024-08-30T13:15:17Z |
---
base_model: unsloth/llama-2-7b-chat-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** mezzihoussem
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-7b-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|