Dataset schema (per-column type and observed range):

| Column | Type | Range / cardinality |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-04 12:28:55 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 539 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-04 12:28:29 |
| card | string | length 11 – 1.01M |
phuongntc/rewardSFT_vietlarge_sum_4000
phuongntc
2024-09-25T02:04:35Z
101
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T02:01:24Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
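The card above is an empty template, so nothing documents how to run the checkpoint. A minimal, hedged loading sketch based only on the row's tags (`t5`, `text2text-generation`); the input string is a placeholder, since the expected prompt format is undocumented:

```python
# Hedged sketch: load the T5 checkpoint named in this row. The input text is a
# placeholder -- the card does not document the expected prompt format.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "phuongntc/rewardSFT_vietlarge_sum_4000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("van ban tieng Viet ...", return_tensors="pt")  # placeholder input
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```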
ASSERT-KTH/RepairLLaMA-IR2-OR2
ASSERT-KTH
2024-09-25T01:52:19Z
54
0
peft
[ "peft", "region:us" ]
null
2024-09-25T01:48:16Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
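The config listed above maps directly onto `transformers.BitsAndBytesConfig`. A hedged sketch of recreating that 8-bit setting and attaching the adapter; the base model name is an assumption, since the card does not state which checkpoint the adapter targets:

```python
# Hedged sketch: recreate the card's 8-bit bitsandbytes config and attach the
# PEFT adapter. BASE_MODEL is an assumption -- the card does not name the base.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "codellama/CodeLlama-7b-hf"  # placeholder; substitute the real base model

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # load_in_8bit: True
    llm_int8_threshold=6.0,                  # llm_int8_threshold: 6.0
    llm_int8_enable_fp32_cpu_offload=False,  # llm_int8_enable_fp32_cpu_offload: False
    llm_int8_has_fp16_weight=False,          # llm_int8_has_fp16_weight: False
)

base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ASSERT-KTH/RepairLLaMA-IR2-OR2")
```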
SongTonyLi/OpenELM-450M-CPT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge
SongTonyLi
2024-09-25T01:50:42Z
117
0
transformers
[ "transformers", "safetensors", "openelm", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-09-25T01:39:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
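This card is also an empty template. Given the row's tags (`openelm`, `text-generation`, `custom_code`), loading should require `trust_remote_code=True`; a hedged sketch follows, where the tokenizer choice is an assumption (OpenELM checkpoints typically reuse a Llama tokenizer rather than shipping their own):

```python
# Hedged sketch: the custom_code tag implies trust_remote_code=True. The
# tokenizer repo is an assumption; OpenELM models usually borrow a Llama tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SongTonyLi/OpenELM-450M-CPT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumption

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```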
ASSERT-KTH/RepairLLaMA-IR1-OR1
ASSERT-KTH
2024-09-25T01:35:29Z
36
0
peft
[ "peft", "region:us" ]
null
2024-09-25T01:24:37Z
--- library_name: peft --- ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: True - load_in_4bit: False - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: fp4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float32 ### Framework versions - PEFT 0.5.0
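Rather than repeating the loading sketch given for the sibling adapter above, here is a hedged way to discover which base model this adapter targets, since the card again omits it:

```python
# Hedged sketch: inspect the adapter's PEFT config to find the base model it
# was trained against (the card itself does not say).
from peft import PeftConfig

config = PeftConfig.from_pretrained("ASSERT-KTH/RepairLLaMA-IR1-OR1")
print(config.base_model_name_or_path)  # the base checkpoint to quantize and load
print(config.peft_type)                # e.g. LORA
```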
Dhurkesh1/tomatoDiseaseClassifier
Dhurkesh1
2024-09-25T01:33:24Z
708
0
null
[ "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "region:us" ]
image-classification
2024-09-25T01:33:11Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: tomatoDiseaseClassifier results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9908371567726135 --- # tomatoDiseaseClassifier Autogenerated by HuggingPics🤗🖼️ This model is designed for classifying tomato leaf disease images. It was fine-tuned using PyTorch Lightning and Hugging Face transformers. ## Example Images #### Tomato_Bacterial_spot ![Tomato_Bacterial_spot](images/Tomato_Bacterial_spot.jpg) #### Tomato_Early_blight ![Tomato_Early_blight](images/Tomato_Early_blight.jpg) #### Tomato_Late_blight ![Tomato_Late_blight](images/Tomato_Late_blight.jpg) #### Tomato_Leaf_Mold ![Tomato_Leaf_Mold](images/Tomato_Leaf_Mold.jpg) #### Tomato_Septoria_leaf_spot ![Tomato_Septoria_leaf_spot](images/Tomato_Septoria_leaf_spot.jpg) #### Tomato_Spider_mites_Two_spotted_spider_mite ![Tomato_Spider_mites_Two_spotted_spider_mite](images/Tomato_Spider_mites_Two_spotted_spider_mite.jpg) #### Tomato__Target_Spot ![Tomato__Target_Spot](images/Tomato__Target_Spot.jpg) #### Tomato__Tomato_YellowLeaf__Curl_Virus ![Tomato__Tomato_YellowLeaf__Curl_Virus](images/Tomato__Tomato_YellowLeaf__Curl_Virus.jpg) #### Tomato__Tomato_mosaic_virus ![Tomato__Tomato_mosaic_virus](images/Tomato__Tomato_mosaic_virus.jpg) #### Tomato_healthy ![Tomato_healthy](images/Tomato_healthy.jpg)
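The card lists class labels but no inference code; a minimal hedged sketch using the transformers pipeline API (the image path is a placeholder):

```python
# Hedged sketch: classify a local leaf photo with the ViT checkpoint above.
from transformers import pipeline

classifier = pipeline("image-classification", model="Dhurkesh1/tomatoDiseaseClassifier")
for pred in classifier("tomato_leaf.jpg"):  # placeholder image path
    print(f"{pred['label']}: {pred['score']:.3f}")
```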
codertrish/FineLlama-3.1-8B-GGUF
codertrish
2024-09-25T01:29:36Z
21
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-09-25T00:45:57Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - gguf --- # Uploaded model - **Developed by:** codertrish - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
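The card does not show how to run the GGUF export. A hedged sketch using `llama-cpp-python`; the GGUF filename inside the repo is an assumption and must be checked against the actual file list:

```python
# Hedged sketch: download a GGUF file from the repo and run it locally.
# FILENAME is an assumption -- check the repo's file list for the real name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="codertrish/FineLlama-3.1-8B-GGUF",
    filename="unsloth.Q4_K_M.gguf",  # assumption; verify against the repo
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Q: What is 2 + 2?\nA:", max_tokens=16)["choices"][0]["text"])
```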
meetkai/functionary-medium-v3.2
meetkai
2024-09-25T01:23:56Z
29
6
null
[ "tensorboard", "safetensors", "llama", "custom_code", "license:mit", "region:us" ]
null
2024-08-16T16:00:28Z
--- license: mit --- # Model Card for functionary-medium-v3.2 **This model was based on [meta-llama/Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)** [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) <img src="https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg" alt="Functionary Logo" width="300"/> Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 - Supports code interpreter ## How to Get Started We provide custom code for parsing raw model responses into a JSON object containing `role`, `content` and `tool_calls` fields. This enables users to read the function-calling output of the model easily. ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-medium-v3.2") model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-medium-v3.2", device_map="auto", trust_remote_code=True) tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } } ] messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}] final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False) inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda") pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer) print(tokenizer.decode(pred.cpu()[0])) ``` ## Prompt Template We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message using a pre-defined Transformers Jinja chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|start_header_id|>system<|end_header_id|> You are capable of executing available function(s) if required.
Only execute function(s) when absolutely necessary. Ask for the required input to:recipient==all Use JSON for function arguments. Respond in this format: >>>${recipient} ${content} Available functions: // Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions<|eot_id|><|start_header_id|>user<|end_header_id|> What is the weather for Istanbul? ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v3.llama3.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
meetkai/functionary-small-v3.1
meetkai
2024-09-25T01:23:47Z
156
17
transformers
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "conversational", "custom_code", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-07-26T16:28:21Z
--- license: mit --- # Model Card for functionary-small-v3.1 **This model was based on [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)**, using Meta's original prompt template as described in: [User-defined Custom tool calling](https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/#user-defined-custom-tool-calling) [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) <img src="https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg" alt="Functionary Logo" width="300"/> Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 - Supports code interpreter ## How to Get Started We provide custom code for parsing raw model responses into a JSON object containing role, content and tool_calls fields. This enables users to read the function-calling output of the model easily. ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.1") model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-small-v3.1", device_map="auto", trust_remote_code=True) tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } } ] messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}] final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False) inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda") pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer) print(tokenizer.decode(pred.cpu()[0])) ``` ## Prompt Template We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message using a pre-defined Transformers Jinja chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g.
San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|start_header_id|>system<|end_header_id|> Environment: ipython Cutting Knowledge Date: December 2023 You have access to the following functions: Use the function 'get_current_weather' to 'Get the current weather' {"name": "get_current_weather", "description": "Get the current weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}},"required": ["location"]}} Think very carefully before calling functions. If a you choose to call a function ONLY reply in the following format: <{start_tag}={function_name}>{parameters}{end_tag} where start_tag => `<function` parameters => a JSON dict with the function argument name as key and function argument value as value. end_tag => `</function>` Here is an example, <function=example_function_name>{"example_name": "example_value"}</function> Reminder: - If looking for real time information use relevant functions before falling back to brave_search - Function calls MUST follow the specified format, start with <function= and end with </function> - Required parameters MUST be specified - Only call one function at a time - Put the entire function call reply on one line <|eot_id|><|start_header_id|>user<|end_header_id|> What is the weather for Istanbul? ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v3-llama3.1.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
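The raw output format quoted above (`<function=name>{json args}</function>`) is straightforward to post-process. A hedged sketch of such a parser (not MeetKai's shipped code) that turns the raw text into tool-call dicts:

```python
# Hedged sketch of parsing the v3.1 <function=...>...</function> format shown
# above into tool-call dicts; this is not MeetKai's official parser.
import json
import re

def parse_tool_calls(text: str) -> list[dict]:
    pattern = r"<function=(\w+)>(\{.*?\})</function>"
    return [
        {"name": name, "arguments": json.loads(args)}
        for name, args in re.findall(pattern, text, re.DOTALL)
    ]

raw = '<function=get_current_weather>{"location": "Istanbul"}</function>'
print(parse_tool_calls(raw))
# [{'name': 'get_current_weather', 'arguments': {'location': 'Istanbul'}}]
```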
meetkai/functionary-small-v2.4
meetkai
2024-09-25T01:22:11Z
50
18
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-04-02T11:47:11Z
# Model Card for functionary-small-v2.4 [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) <img src="https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg" alt="Functionary Logo" width="300"/> Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 - Supports code interpreter Our model achieves state-of-the-art performance in Function Calling Accuracy on the SGD dataset. The accuracy metric measures the overall correctness of predicted function calls, including function name prediction and arguments extraction. ![Eval Chart](https://huggingface.co/meetkai/functionary-small-v2.4/resolve/main/SGD_acc_chart.png "Eval Chart") | Dataset | Model Name | Function Calling Accuracy (Name & Arguments) | | :-------------| :-------------------| ---------------------------: | | SGD | MeetKai-functionary-small-v2.4 | 83.07| | SGD | MeetKai-functionary-medium-v2.4 | **88.11**| | SGD | OpenAI-gpt-3.5-turbo-1106 | 71.64 | | SGD | OpenAI-gpt-4-1106-preview | 76.29 | ## Prompt Template We use a specially designed prompt template which we call "v2PromptTemplate" that breaks down each turn into from, recipient and content portions. We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message and uses a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|from|>system <|recipient|>all <|content|>// Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions <|from|>system <|recipient|>all <|content|>A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant calls functions with appropriate input when necessary <|from|>user <|recipient|>all <|content|>What is the weather for Istanbul?
``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v2.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
meetkai/functionary-medium-v2.2
meetkai
2024-09-25T01:19:28Z
11
26
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T01:31:46Z
# Model Card for functionary-medium-v2.2 [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) ![Functionary Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg "Functionary Logo") Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 ## Performance Our model achieves state-of-the-art performance in Function Calling Accuracy on our in-house dataset. The accuracy metric measures the overall correctness of predicted function calls, including function name prediction and arguments extraction. ![Eval Chart](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/evaluation_chart.jpeg "Eval Chart") | Dataset | Model Name | Function Calling Accuracy (Name & Arguments) | | :-------------| :-------------------| ---------------------------: | | In-house data | MeetKai-functionary-small-v2.2 | 0.546| | In-house data | MeetKai-functionary-medium-v2.2 | **0.664**| | In-house data | OpenAI-gpt-3.5-turbo-1106 | 0.531 | | In-house data | OpenAI-gpt-4-1106-preview | **0.737** | ## Prompt Template We use a specially designed prompt template which we call "v2PromptTemplate" that breaks down each turn into from, recipient and content portions. We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message and uses a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|from|>system <|recipient|>all <|content|>// Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions <|from|>system <|recipient|>all <|content|>A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
The assistant calls functions with appropriate input when necessary <|from|>user <|recipient|>all <|content|>What is the weather for Istanbul? ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v2.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
meetkai/functionary-7b-v2.1
meetkai
2024-09-25T01:19:14Z
12
12
transformers
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-12-20T20:26:10Z
# Model Card for functionary-7b-v2.1 [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) ![Functionary Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg "Functionary Logo") Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 ## Performance Our model achieves state-of-the-art performance in Function Calling Accuracy on our in-house dataset. The accuracy metric measures the overall correctness of predicted function calls, including function name prediction and arguments extraction. ![Eval Chart](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/evaluation_chart.jpeg "Eval Chart") | Dataset | Model Name | Function Calling Accuracy (Name & Arguments) | | :-------------| :-------------------| ---------------------------: | | In-house data | MeetKai-functionary-small-v2.2 | 0.546| | In-house data | MeetKai-functionary-medium-v2.2 | **0.664**| | In-house data | OpenAI-gpt-3.5-turbo-1106 | 0.531 | | In-house data | OpenAI-gpt-4-1106-preview | **0.737** | ## Prompt Template We use a specially designed prompt template which we call "v2PromptTemplate" that breaks down each turn into from, recipient and content portions. We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message and uses a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|from|>system <|recipient|>all <|content|>// Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions <|from|>system <|recipient|>all <|content|>A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
The assistant calls functions with appropriate input when necessary <|from|>user <|recipient|>all <|content|>What is the weather for Istanbul? ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v2.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
meetkai/functionary-small-v3.2
meetkai
2024-09-25T01:18:34Z
4,282
33
null
[ "tensorboard", "safetensors", "llama", "custom_code", "license:mit", "region:us" ]
null
2024-08-06T16:00:22Z
--- license: mit --- # Model Card for functionary-small-v3.2 **This model was based on [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)** [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) <img src="https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg" alt="Functionary Logo" width="300"/> Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 - Supports code interpreter ## How to Get Started We provide custom code for parsing raw model responses into a JSON object containing `role`, `content` and `tool_calls` fields. This enables users to read the function-calling output of the model easily. ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("meetkai/functionary-small-v3.2") model = AutoModelForCausalLM.from_pretrained("meetkai/functionary-small-v3.2", device_map="auto", trust_remote_code=True) tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } } ] messages = [{"role": "user", "content": "What is the weather in Istanbul and Singapore respectively?"}] final_prompt = tokenizer.apply_chat_template(messages, tools, add_generation_prompt=True, tokenize=False) inputs = tokenizer(final_prompt, return_tensors="pt").to("cuda") pred = model.generate_tool_use(**inputs, max_new_tokens=128, tokenizer=tokenizer) print(tokenizer.decode(pred.cpu()[0])) ``` ## Prompt Template We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message using a pre-defined Transformers Jinja chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|start_header_id|>system<|end_header_id|> You are capable of executing available function(s) if required.
Only execute function(s) when absolutely necessary. Ask for the required input to:recipient==all Use JSON for function arguments. Respond in this format: >>>${recipient} ${content} Available functions: // Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions<|eot_id|><|start_header_id|>user<|end_header_id|> What is the weather for Istanbul? ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v3.llama3.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
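The v3.2 raw format differs from v3.1: the response is a sequence of `>>>recipient` markers, each followed by content. A hedged splitter sketch (again not the official parser, and the whitespace handling is an assumption inferred from the flattened prompt above):

```python
# Hedged sketch: split the v3.2 ">>>recipient content" response format shown
# above into (recipient, content) pairs; not MeetKai's official parser.
import re

def split_chunks(text: str) -> list[tuple[str, str]]:
    return [
        (m.group(1), m.group(2).strip())
        for m in re.finditer(r">>>(\S+)\s+(.*?)(?=>>>|\Z)", text, re.DOTALL)
    ]

raw = '>>>get_current_weather\n{"location": "Istanbul"}'
print(split_chunks(raw))  # [('get_current_weather', '{"location": "Istanbul"}')]
```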
Thalesian/t5-base-p-l-akk-en-20240922-080244
Thalesian
2024-09-25T00:56:11Z
6
0
null
[ "safetensors", "t5", "generated_from_trainer", "region:us" ]
null
2024-09-25T00:44:40Z
--- tags: - generated_from_trainer model-index: - name: t5-base-p-l-akk-en-20240922-080244 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-p-l-akk-en-20240922-080244 This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8507 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3.152142797506865e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:------:|:---------------:| | 0.9444 | 1.1384 | 2500 | 0.8638 | | 0.8431 | 2.2769 | 5000 | 0.8085 | | 0.7912 | 3.4153 | 7500 | 0.7750 | | 0.7434 | 4.5537 | 10000 | 0.7531 | | 0.7171 | 5.6922 | 12500 | 0.7395 | | 0.692 | 6.8306 | 15000 | 0.7278 | | 0.6596 | 7.9690 | 17500 | 0.7165 | | 0.6155 | 9.1075 | 20000 | 0.7231 | | 0.61 | 10.2459 | 22500 | 0.7129 | | 0.5886 | 11.3843 | 25000 | 0.7068 | | 0.5718 | 12.5228 | 27500 | 0.7084 | | 0.5519 | 13.6612 | 30000 | 0.7029 | | 0.5412 | 14.7996 | 32500 | 0.7007 | | 0.5241 | 15.9381 | 35000 | 0.7017 | | 0.5026 | 17.0765 | 37500 | 0.7134 | | 0.4733 | 18.2149 | 40000 | 0.7038 | | 0.489 | 19.3534 | 42500 | 0.7067 | | 0.4666 | 20.4918 | 45000 | 0.7083 | | 0.4494 | 21.6302 | 47500 | 0.7061 | | 0.4545 | 22.7687 | 50000 | 0.7092 | | 0.4357 | 23.9071 | 52500 | 0.7116 | | 0.4332 | 25.0455 | 55000 | 0.7189 | | 0.4152 | 26.1840 | 57500 | 0.7207 | | 0.3995 | 27.3224 | 60000 | 0.7196 | | 0.3976 | 28.4608 | 62500 | 0.7184 | | 0.3879 | 29.5993 | 65000 | 0.7210 | | 0.3812 | 30.7377 | 67500 | 0.7243 | | 0.3749 | 31.8761 | 70000 | 0.7241 | | 0.3663 | 33.0146 | 72500 | 0.7320 | | 0.3612 | 34.1530 | 75000 | 0.7344 | | 0.3469 | 35.2914 | 77500 | 0.7377 | | 0.3407 | 36.4299 | 80000 | 0.7388 | | 0.3309 | 37.5683 | 82500 | 0.7411 | | 0.3354 | 38.7067 | 85000 | 0.7354 | | 0.3252 | 39.8452 | 87500 | 0.7407 | | 0.3167 | 40.9836 | 90000 | 0.7435 | | 0.3182 | 42.1220 | 92500 | 0.7502 | | 0.2994 | 43.2605 | 95000 | 0.7547 | | 0.3064 | 44.3989 | 97500 | 0.7561 | | 0.2923 | 45.5373 | 100000 | 0.7529 | | 0.2848 | 46.6758 | 102500 | 0.7593 | | 0.2843 | 47.8142 | 105000 | 0.7600 | | 0.279 | 48.9526 | 107500 | 0.7650 | | 0.2781 | 50.0911 | 110000 | 0.7706 | | 0.2629 | 51.2295 | 112500 | 0.7730 | | 0.2639 | 52.3679 | 115000 | 0.7726 | | 0.2624 | 53.5064 | 117500 | 0.7791 | | 0.2547 | 54.6448 | 120000 | 0.7776 | | 0.2567 | 55.7832 | 122500 | 0.7747 | | 0.2484 | 56.9217 | 125000 | 0.7792 | | 0.2454 | 58.0601 | 127500 | 0.7893 | | 0.2398 | 59.1985 | 130000 | 0.7864 | | 0.2313 | 60.3370 | 132500 | 0.7973 | | 0.2362 | 61.4754 | 135000 | 0.7964 | | 0.2359 | 62.6138 | 137500 | 0.7962 | | 0.226 | 63.7523 | 140000 | 0.8009 | | 0.2271 | 64.8907 | 142500 | 0.8027 | | 0.2249 | 66.0291 | 145000 | 0.8014 | | 0.2212 | 67.1676 | 147500 | 0.8077 | | 0.2129 | 68.3060 | 150000 | 0.8088 | | 0.2131 | 69.4444 | 152500 | 0.8108 | | 0.2106 | 70.5829 | 155000 | 0.8144 | | 0.2078 | 71.7213 | 157500 | 0.8163 | | 0.2103 | 72.8597 | 160000 | 0.8148 | | 0.2025 | 73.9982 | 
162500 | 0.8215 | | 0.2023 | 75.1366 | 165000 | 0.8250 | | 0.197 | 76.2750 | 167500 | 0.8267 | | 0.1945 | 77.4135 | 170000 | 0.8274 | | 0.1919 | 78.5519 | 172500 | 0.8289 | | 0.187 | 79.6903 | 175000 | 0.8308 | | 0.1948 | 80.8288 | 177500 | 0.8339 | | 0.1857 | 81.9672 | 180000 | 0.8346 | | 0.191 | 83.1056 | 182500 | 0.8380 | | 0.1796 | 84.2441 | 185000 | 0.8387 | | 0.1862 | 85.3825 | 187500 | 0.8414 | | 0.185 | 86.5209 | 190000 | 0.8409 | | 0.1778 | 87.6594 | 192500 | 0.8434 | | 0.1824 | 88.7978 | 195000 | 0.8426 | | 0.1735 | 89.9362 | 197500 | 0.8443 | | 0.1737 | 91.0747 | 200000 | 0.8474 | | 0.1787 | 92.2131 | 202500 | 0.8462 | | 0.1759 | 93.3515 | 205000 | 0.8484 | | 0.1744 | 94.4900 | 207500 | 0.8487 | | 0.1778 | 95.6284 | 210000 | 0.8502 | | 0.1767 | 96.7668 | 212500 | 0.8507 | | 0.175 | 97.9053 | 215000 | 0.8499 | | 0.1723 | 99.0437 | 217500 | 0.8507 | ### Framework versions - Transformers 4.41.2 - Pytorch 2.3.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
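The card records hyperparameters and loss curves but no usage. Given the repo name (akk-en), this appears to be an Akkadian-to-English T5 translator; a hedged sketch follows, where the input format is an assumption:

```python
# Hedged sketch: run the seq2seq checkpoint above. The input is a placeholder;
# the expected transliteration/prompt format is undocumented in the card.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Thalesian/t5-base-p-l-akk-en-20240922-080244"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("szum-ma a-wi-lum ...", return_tensors="pt")  # placeholder Akkadian
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```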
daedalus16/bart-M2x3-insights
daedalus16
2024-09-25T00:48:29Z
93
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-08-05T22:22:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
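Since the card is an empty template, a minimal hedged loading sketch based on the row's metadata (BART, `text2text-generation`); the input text is a placeholder:

```python
# Hedged sketch: load the BART checkpoint with the task implied by the row's
# pipeline_tag. The input string is a placeholder.
from transformers import pipeline

generator = pipeline("text2text-generation", model="daedalus16/bart-M2x3-insights")
print(generator("placeholder input text")[0]["generated_text"])
```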
apa224/dreambooth_42_28_19
apa224
2024-09-25T00:31:19Z
29
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-09-25T00:26:02Z
--- base_model: dreambooth_42_28 library_name: diffusers license: creativeml-openrail-m tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers inference: true instance_prompt: realistic apple tree --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - apa224/dreambooth_42_28_19 This is a DreamBooth model derived from dreambooth_42_28. The weights were trained on "realistic apple tree" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
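The card's usage snippet is still a TODO. A minimal hedged sketch for a Stable Diffusion DreamBooth checkpoint, using the instance prompt the card itself states ("realistic apple tree"); float16 and CUDA are assumptions about the runtime:

```python
# Hedged sketch: run the DreamBooth pipeline above. The prompt comes from the
# card's instance_prompt; dtype/device choices are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "apa224/dreambooth_42_28_19", torch_dtype=torch.float16
).to("cuda")

image = pipe("realistic apple tree", num_inference_steps=50).images[0]
image.save("apple_tree.png")
```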
daedalus16/bart-M2x2-insights
daedalus16
2024-09-25T00:29:47Z
92
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-08-05T18:03:30Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
unicamp-dl/InRanker-3B
unicamp-dl
2024-09-25T00:29:38Z
5
3
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "en", "arxiv:2401.06910", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2023-12-13T18:45:13Z
--- language: - en --- # InRanker-3B (3B parameters) InRanker is a version of monoT5 self-distilled from [monoT5-3B](https://huggingface.co/castorini/monot5-3b-msmarco-10k) with increased effectiveness on out-of-domain scenarios. Our key insight was to use language models and rerankers to generate as much synthetic "in-domain" training data as possible, i.e., data that closely resembles the data that will be seen at retrieval time. The pipeline used for training consists of two distillation phases that do not require additional user queries or manual annotations: (1) training on existing supervised soft teacher labels, and (2) training on teacher soft labels for synthetic queries generated using a large language model. The paper with further details can be found [here](https://arxiv.org/abs/2401.06910). The code and library are available at https://github.com/unicamp-dl/InRanker ## Usage The library was tested with Python 3.10 and is installed with: ```bash pip install inranker ``` The code for inference is: ```python from inranker import T5Ranker model = T5Ranker(model_name_or_path="unicamp-dl/InRanker-3B") docs = [ "The capital of France is Paris", "Learn deep learning with InRanker and transformers" ] scores = model.get_scores( query="What is the best way to learn deep learning?", docs=docs ) # Scores lie in [0, 1]; sort documents from most to least relevant sorted_scores = sorted(zip(scores, docs), key=lambda x: x[0], reverse=True) ``` ## How to Cite ``` @misc{laitz2024inranker, title={InRanker: Distilled Rankers for Zero-shot Information Retrieval}, author={Thiago Laitz and Konstantinos Papakostas and Roberto Lotufo and Rodrigo Nogueira}, year={2024}, eprint={2401.06910}, archivePrefix={arXiv}, primaryClass={cs.IR} } ```
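If you prefer not to depend on the `inranker` package, the reranker can also be scored with plain `transformers`. The sketch below assumes the standard monoT5 prompt format (`Query: ... Document: ... Relevant:`) and true/false first-token scoring; InRanker is distilled from monoT5, but verify the exact format against the InRanker repository before relying on it.

```python
# A minimal monoT5-style scoring sketch with plain transformers.
# Assumption: InRanker keeps monoT5's "Query: ... Document: ... Relevant:"
# prompt and its true/false relevance labels.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

name = "unicamp-dl/InRanker-3B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name).eval()

true_id = tokenizer.encode("true", add_special_tokens=False)[0]
false_id = tokenizer.encode("false", add_special_tokens=False)[0]

def score(query: str, doc: str) -> float:
    prompt = f"Query: {query} Document: {doc} Relevant:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=start).logits[0, -1]
    # Relevance = probability of "true" vs "false" at the first decoding step.
    return torch.softmax(logits[[false_id, true_id]], dim=0)[1].item()

print(score("What is the capital of France?", "The capital of France is Paris"))
```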
saba9/1727223864-bert-base-uncased
saba9
2024-09-25T00:29:18Z
174
0
transformers
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-09-25T00:29:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
roequitz/bart-abs-2409-1947-lr-3e-06-bs-8-maxep-6
roequitz
2024-09-25T00:29:15Z
85
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-xsum-12-6", "base_model:finetune:sshleifer/distilbart-xsum-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T00:28:29Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-xsum-12-6 tags: - generated_from_trainer model-index: - name: bart-abs-2409-1947-lr-3e-06-bs-8-maxep-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-abs-2409-1947-lr-3e-06-bs-8-maxep-6 This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.8121 - Rouge/rouge1: 0.2722 - Rouge/rouge2: 0.0714 - Rouge/rougel: 0.2029 - Rouge/rougelsum: 0.2031 - Bertscore/bertscore-precision: 0.8612 - Bertscore/bertscore-recall: 0.8618 - Bertscore/bertscore-f1: 0.8615 - Meteor: 0.2582 - Gen Len: 44.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------:|:-------:| | 0.1809 | 1.0 | 109 | 7.7658 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.1825 | 2.0 | 218 | 7.7891 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.1822 | 3.0 | 327 | 7.8030 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.1807 | 4.0 | 436 | 7.8142 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.2002 | 5.0 | 545 | 7.8145 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.2047 | 6.0 | 654 | 7.8121 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
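The card above does not document the training data (the "None dataset" phrasing is a Trainer artifact), so the sketch below only shows how the listed hyperparameters map onto the `Seq2SeqTrainer` API; the toy dataset is a placeholder, not the data used for this checkpoint.

```python
# A minimal sketch of the listed hyperparameters in Seq2SeqTrainer form.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainingArguments,
                          Seq2SeqTrainer)

base = "sshleifer/distilbart-xsum-12-6"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# Hypothetical placeholder data; the real dataset is not documented.
raw = Dataset.from_dict({"text": ["A long source document ..."],
                         "summary": ["A short reference summary."]})

def preprocess(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(text_target=batch["summary"],
                              truncation=True, max_length=64)["input_ids"]
    return enc

ds = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="bart-abs-2409-1947-lr-3e-06-bs-8-maxep-6",
    learning_rate=3e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=6,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" per the card; requires a CUDA device
)
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=ds,
                         eval_dataset=ds, tokenizer=tokenizer,
                         data_collator=DataCollatorForSeq2Seq(tokenizer, model=model))
trainer.train()
```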
saba9/1727223864-whisper-large-v3
saba9
2024-09-25T00:27:57Z
119
0
transformers
[ "transformers", "safetensors", "whisper", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
2024-09-25T00:25:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
roequitz/bart-abs-2409-1947-lr-3e-06-bs-2-maxep-10
roequitz
2024-09-25T00:16:06Z
91
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-xsum-12-6", "base_model:finetune:sshleifer/distilbart-xsum-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-25T00:15:23Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-xsum-12-6 tags: - generated_from_trainer model-index: - name: bart-abs-2409-1947-lr-3e-06-bs-2-maxep-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-abs-2409-1947-lr-3e-06-bs-2-maxep-10 This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.7957 - Rouge/rouge1: 0.3097 - Rouge/rouge2: 0.0856 - Rouge/rougel: 0.2463 - Rouge/rougelsum: 0.2464 - Bertscore/bertscore-precision: 0.8589 - Bertscore/bertscore-recall: 0.8656 - Bertscore/bertscore-f1: 0.8622 - Meteor: 0.2246 - Gen Len: 36.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------:|:-------:| | 0.1989 | 1.0 | 434 | 7.6906 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.1993 | 2.0 | 868 | 7.7371 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.1978 | 3.0 | 1302 | 7.7614 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.1965 | 4.0 | 1736 | 7.7990 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.1968 | 5.0 | 2170 | 7.8204 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.215 | 6.0 | 2604 | 7.8073 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.2144 | 7.0 | 3038 | 7.8003 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.2133 | 8.0 | 3472 | 7.7974 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.2136 | 9.0 | 3906 | 7.7947 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.2119 | 10.0 | 4340 | 7.7957 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
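Since the card gives no usage snippet, here is a minimal inference sketch via the `summarization` pipeline; the generation settings are illustrative choices, not values documented above.

```python
# A minimal inference sketch for this fine-tuned summarizer.
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="roequitz/bart-abs-2409-1947-lr-3e-06-bs-2-maxep-10")
text = "A long article to summarize ..."
print(summarizer(text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```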
dat-lequoc/vLLM-fast-apply-16bit-v0.1
dat-lequoc
2024-09-25T00:02:52Z
33
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T23:55:35Z
--- base_model: unsloth/qwen2.5-coder-7b-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl --- # Uploaded model - **Developed by:** quocdat25 - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-coder-7b-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
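The repository name suggests these merged 16-bit weights are intended for vLLM serving. Under that assumption, a minimal sketch (the prompt is a hypothetical placeholder; plain `transformers` loading works as well):

```python
# A minimal vLLM serving sketch; assumes the weights load as a standard
# Qwen2-architecture causal LM.
from vllm import LLM, SamplingParams

llm = LLM(model="dat-lequoc/vLLM-fast-apply-16bit-v0.1")
params = SamplingParams(temperature=0.2, max_tokens=128)
outputs = llm.generate(["Apply this edit to the file: ..."], params)  # hypothetical prompt
print(outputs[0].outputs[0].text)
```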
roequitz/bart-abs-2409-1947-lr-3e-06-bs-4-maxep-10
roequitz
2024-09-24T23:31:14Z
91
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-xsum-12-6", "base_model:finetune:sshleifer/distilbart-xsum-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-24T23:30:28Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-xsum-12-6 tags: - generated_from_trainer model-index: - name: bart-abs-2409-1947-lr-3e-06-bs-4-maxep-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-abs-2409-1947-lr-3e-06-bs-4-maxep-10 This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.5506 - Rouge/rouge1: 0.3083 - Rouge/rouge2: 0.0846 - Rouge/rougel: 0.2429 - Rouge/rougelsum: 0.2432 - Bertscore/bertscore-precision: 0.8595 - Bertscore/bertscore-recall: 0.8658 - Bertscore/bertscore-f1: 0.8626 - Meteor: 0.2274 - Gen Len: 36.5818 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------:|:-------:| | 0.2264 | 1.0 | 217 | 7.2980 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.243 | 2.0 | 434 | 7.3567 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.2153 | 3.0 | 651 | 7.4039 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.2144 | 4.0 | 868 | 7.4520 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.2122 | 5.0 | 1085 | 7.4870 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.2117 | 6.0 | 1302 | 7.5097 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | | 0.2116 | 7.0 | 1519 | 7.5305 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.2108 | 8.0 | 1736 | 7.5441 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.2129 | 9.0 | 1953 | 7.5483 | 0.2982 | 0.0831 | 0.2333 | 0.2336 | 0.8593 | 0.8643 | 0.8618 | 0.2356 | 38.4 | | 0.2123 | 10.0 | 2170 | 7.5506 | 0.3083 | 0.0846 | 0.2429 | 0.2432 | 0.8595 | 0.8658 | 0.8626 | 0.2274 | 36.5818 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
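The exact evaluation script behind the ROUGE/BERTScore/METEOR numbers above is not documented; the sketch below shows one plausible way to compute the same metrics with the `evaluate` library.

```python
# A minimal metric-computation sketch with the `evaluate` library.
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")
meteor = evaluate.load("meteor")

preds = ["a model-generated summary"]   # placeholder predictions
refs = ["a human reference summary"]    # placeholder references
print(rouge.compute(predictions=preds, references=refs))
print(bertscore.compute(predictions=preds, references=refs, lang="en"))
print(meteor.compute(predictions=preds, references=refs))
```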
gmedrano/snowflake-arctic-embed-m-finetuned
gmedrano
2024-09-24T23:27:07Z
46
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:40", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:Snowflake/snowflake-arctic-embed-m", "base_model:finetune:Snowflake/snowflake-arctic-embed-m", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-09-24T23:26:37Z
--- base_model: Snowflake/snowflake-arctic-embed-m library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:40 - loss:CosineSimilarityLoss widget: - source_sentence: What role does NIST play in establishing AI standards? sentences: - "provides examples and concrete steps for communities, industry, governments,\ \ and others to take in order to \nbuild these protections into policy, practice,\ \ or the technological design process. \nTaken together, the technical protections\ \ and practices laid out in the Blueprint for an AI Bill of Rights can help \n\ guard the American public against many of the potential and actual harms identified\ \ by researchers, technolo­" - "provides examples and concrete steps for communities, industry, governments,\ \ and others to take in order to \nbuild these protections into policy, practice,\ \ or the technological design process. \nTaken together, the technical protections\ \ and practices laid out in the Blueprint for an AI Bill of Rights can help \n\ guard the American public against many of the potential and actual harms identified\ \ by researchers, technolo­" - "Acknowledgments: This report was accomplished with the many helpful comments\ \ and contributions \nfrom the community, including the NIST Generative AI Public\ \ Working Group, and NIST staff and guest \nresearchers: Chloe Autio, Jesse Dunietz,\ \ Patrick Hall, Shomik Jain, Kamie Roberts, Reva Schwartz, Martin \nStanley, and\ \ Elham Tabassi. \nNIST Technical Series Policies \nCopyright, Use, and Licensing\ \ Statements \nNIST Technical Series Publication Identifier Syntax \nPublication\ \ History" - source_sentence: What are the implications of AI in decision-making processes? sentences: - "The measures taken to realize the vision set forward in this framework should\ \ be proportionate \nwith the extent and nature of the harm, or risk of harm,\ \ to people's rights, opportunities, and \naccess. \nRELATIONSHIP TO EXISTING\ \ LAW AND POLICY\nThe Blueprint for an AI Bill of Rights is an exercise in envisioning\ \ a future where the American public is \nprotected from the potential harms,\ \ and can fully enjoy the benefits, of automated systems. It describes princi­" - "state of the science of AI measurement and safety today. This document focuses\ \ on risks for which there \nis an existing empirical evidence base at the time\ \ this profile was written; for example, speculative risks \nthat may potentially\ \ arise in more advanced, future GAI systems are not considered. Future updates\ \ may \nincorporate additional risks or provide further details on the risks identified\ \ below." - "development of automated systems that adhere to and advance their safety, security\ \ and \neffectiveness. Multiple NSF programs support research that directly addresses\ \ many of these principles: \nthe National AI Research Institutes23 support research\ \ on all aspects of safe, trustworthy, fair, and explainable \nAI algorithms and\ \ systems; the Cyber Physical Systems24 program supports research on developing\ \ safe" - source_sentence: How are AI systems validated for safety and fairness according to NIST standards? 
sentences: - "tion and advises on implementation of the DOE AI Strategy and addresses issues\ \ and/or escalations on the \nethical use and development of AI systems.20 The\ \ Department of Defense has adopted Artificial Intelligence \nEthical Principles,\ \ and tenets for Responsible Artificial Intelligence specifically tailored to\ \ its national \nsecurity and defense activities.21 Similarly, the U.S. Intelligence\ \ Community (IC) has developed the Principles" - "GOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed,\ \ and documented. \nAction ID \nSuggested Action \nGAI Risks \nGV-1.1-001 Align\ \ GAI development and use with applicable laws and regulations, including \nthose\ \ related to data privacy, copyright and intellectual property law. \nData Privacy;\ \ Harmful Bias and \nHomogenization; Intellectual \nProperty \nAI Actor Tasks:\ \ Governance and Oversight" - "more than a decade, is also helping to fulfill the 2023 Executive Order on Safe,\ \ Secure, and Trustworthy \nAI. NIST established the U.S. AI Safety Institute\ \ and the companion AI Safety Institute Consortium to \ncontinue the efforts set\ \ in motion by the E.O. to build the science necessary for safe, secure, and \n\ trustworthy development and use of AI. \nAcknowledgments: This report was accomplished\ \ with the many helpful comments and contributions" - source_sentence: How does the AI Bill of Rights protect individual privacy? sentences: - "match the statistical properties of real-world data without disclosing personally\ \ \nidentifiable information or contributing to homogenization. \nData Privacy;\ \ Intellectual Property; \nInformation Integrity; \nConfabulation; Harmful Bias\ \ and \nHomogenization \nAI Actor Tasks: AI Deployment, AI Impact Assessment,\ \ Governance and Oversight, Operation and Monitoring \n \nMANAGE 2.3: Procedures\ \ are followed to respond to and recover from a previously unknown risk when it\ \ is identified. \nAction ID" - "the principles described in the Blueprint for an AI Bill of Rights may be necessary\ \ to comply with existing law, \nconform to the practicalities of a specific use\ \ case, or balance competing public interests. In particular, law \nenforcement,\ \ and other regulatory contexts may require government actors to protect civil\ \ rights, civil liberties, \nand privacy in a manner consistent with, but using\ \ alternate mechanisms to, the specific principles discussed in" - "civil rights, civil liberties, and privacy. The Blueprint for an AI Bill of Rights\ \ includes this Foreword, the five \nprinciples, notes on Applying the The Blueprint\ \ for an AI Bill of Rights, and a Technical Companion that gives \nconcrete steps\ \ that can be taken by many kinds of organizations—from governments at all levels\ \ to companies of \nall sizes—to uphold these values. Experts from across the\ \ private sector, governments, and international" - source_sentence: How does the AI Bill of Rights protect individual privacy? sentences: - "57 \nNational Institute of Standards and Technology (2023) AI Risk Management\ \ Framework, Appendix B: \nHow AI Risks Differ from Traditional Software Risks.\ \ \nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Appendices/Appendix_B \n\ National Institute of Standards and Technology (2023) AI RMF Playbook. 
\nhttps://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook\ \ \nNational Institue of Standards and Technology (2023) Framing Risk" - "principles for managing information about individuals have been incorporated\ \ into data privacy laws and \npolicies across the globe.5 The Blueprint for an\ \ AI Bill of Rights embraces elements of the FIPPs that are \nparticularly relevant\ \ to automated systems, without articulating a specific set of FIPPs or scoping\ \ \napplicability or the interests served to a single particular domain, like\ \ privacy, civil rights and civil liberties," - "harmful \nuses. \nThe \nNIST \nframework \nwill \nconsider \nand \nencompass\ \ \nprinciples \nsuch \nas \ntransparency, accountability, and fairness during\ \ pre-design, design and development, deployment, use, \nand testing and evaluation\ \ of AI technologies and systems. It is expected to be released in the winter\ \ of 2022-23. \n21" model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m results: - task: type: semantic-similarity name: Semantic Similarity dataset: name: val type: val metrics: - type: pearson_cosine value: 0.6585006489314952 name: Pearson Cosine - type: spearman_cosine value: 0.7 name: Spearman Cosine - type: pearson_manhattan value: 0.582665729755017 name: Pearson Manhattan - type: spearman_manhattan value: 0.6 name: Spearman Manhattan - type: pearson_euclidean value: 0.6722783219807118 name: Pearson Euclidean - type: spearman_euclidean value: 0.7 name: Spearman Euclidean - type: pearson_dot value: 0.6585002582595083 name: Pearson Dot - type: spearman_dot value: 0.7 name: Spearman Dot - type: pearson_max value: 0.6722783219807118 name: Pearson Max - type: spearman_max value: 0.7 name: Spearman Max - task: type: semantic-similarity name: Semantic Similarity dataset: name: test type: test metrics: - type: pearson_cosine value: 0.7463407966146629 name: Pearson Cosine - type: spearman_cosine value: 0.7999999999999999 name: Spearman Cosine - type: pearson_manhattan value: 0.7475379067038609 name: Pearson Manhattan - type: spearman_manhattan value: 0.7999999999999999 name: Spearman Manhattan - type: pearson_euclidean value: 0.7592380598802199 name: Pearson Euclidean - type: spearman_euclidean value: 0.7999999999999999 name: Spearman Euclidean - type: pearson_dot value: 0.7463412670178408 name: Pearson Dot - type: spearman_dot value: 0.7999999999999999 name: Spearman Dot - type: pearson_max value: 0.7592380598802199 name: Pearson Max - type: spearman_max value: 0.7999999999999999 name: Spearman Max --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-m This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. 
## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision e2b128b9fa60c82b4585512b33e1544224ffff42 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("gmedrano/snowflake-arctic-embed-m-finetuned") # Run inference sentences = [ 'How does the AI Bill of Rights protect individual privacy?', 'principles for managing information about individuals have been incorporated into data privacy laws and \npolicies across the globe.5 The Blueprint for an AI Bill of Rights embraces elements of the FIPPs that are \nparticularly relevant to automated systems, without articulating a specific set of FIPPs or scoping \napplicability or the interests served to a single particular domain, like privacy, civil rights and civil liberties,', 'harmful \nuses. \nThe \nNIST \nframework \nwill \nconsider \nand \nencompass \nprinciples \nsuch \nas \ntransparency, accountability, and fairness during pre-design, design and development, deployment, use, \nand testing and evaluation of AI technologies and systems. It is expected to be released in the winter of 2022-23. \n21', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Semantic Similarity * Dataset: `val` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:-------------------|:--------| | pearson_cosine | 0.6585 | | spearman_cosine | 0.7 | | pearson_manhattan | 0.5827 | | spearman_manhattan | 0.6 | | pearson_euclidean | 0.6723 | | spearman_euclidean | 0.7 | | pearson_dot | 0.6585 | | spearman_dot | 0.7 | | pearson_max | 0.6723 | | **spearman_max** | **0.7** | #### Semantic Similarity * Dataset: `test` * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator) | Metric | Value | |:-------------------|:--------| | pearson_cosine | 0.7463 | | spearman_cosine | 0.8 | | pearson_manhattan | 0.7475 | | spearman_manhattan | 0.8 | | pearson_euclidean | 0.7592 | | spearman_euclidean | 0.8 | | pearson_dot | 0.7463 | | spearman_dot | 0.8 | | pearson_max | 0.7592 | | **spearman_max** | **0.8** | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 40 training samples * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code> * Approximate statistics based on the first 40 samples: | | sentence_0 | sentence_1 | label | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------| | type | string | string | float | | details | <ul><li>min: 12 tokens</li><li>mean: 14.43 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 80.55 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 0.53</li><li>mean: 0.61</li><li>max: 0.76</li></ul> | * Samples: | sentence_0 | sentence_1 | label | |:----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------| | <code>What should business leaders understand about AI risk management?</code> | <code>57 <br>National Institute of Standards and Technology (2023) AI Risk Management Framework, Appendix B: <br>How AI Risks Differ from Traditional Software Risks. <br>https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF/Appendices/Appendix_B <br>National Institute of Standards and Technology (2023) AI RMF Playbook. 
<br>https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook <br>National Institue of Standards and Technology (2023) Framing Risk</code> | <code>0.5692041097520776</code> | | <code>What kind of data protection measures are required under current AI regulations?</code> | <code>GOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented. <br>Action ID <br>Suggested Action <br>GAI Risks <br>GV-1.1-001 Align GAI development and use with applicable laws and regulations, including <br>those related to data privacy, copyright and intellectual property law. <br>Data Privacy; Harmful Bias and <br>Homogenization; Intellectual <br>Property <br>AI Actor Tasks: Governance and Oversight</code> | <code>0.5830958798587019</code> | | <code>What are the implications of AI in decision-making processes?</code> | <code>state of the science of AI measurement and safety today. This document focuses on risks for which there <br>is an existing empirical evidence base at the time this profile was written; for example, speculative risks <br>that may potentially arise in more advanced, future GAI systems are not considered. Future updates may <br>incorporate additional risks or provide further details on the risks identified below.</code> | <code>0.5317174553776045</code> | * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters: ```json { "loss_fct": "torch.nn.modules.loss.MSELoss" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 3 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: 
{'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | test_spearman_max | val_spearman_max | |:-----:|:----:|:-----------------:|:----------------:| | 1.0 | 3 | - | 0.6 | | 2.0 | 6 | - | 0.7 | | 3.0 | 9 | 0.8000 | 0.7 | ### Framework Versions - Python: 3.11.9 - Sentence Transformers: 3.1.1 - Transformers: 4.44.2 - PyTorch: 2.2.2 - Accelerate: 0.34.2 - Datasets: 3.0.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
krzonkalla/Detector_de_Cancer_de_Pele_v2.1
krzonkalla
2024-09-24T23:11:57Z
198
0
transformers
[ "transformers", "safetensors", "swin", "image-classification", "base_model:microsoft/swin-large-patch4-window12-384-in22k", "base_model:finetune:microsoft/swin-large-patch4-window12-384-in22k", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-09-24T23:03:05Z
--- license: mit metrics: - accuracy base_model: microsoft/swin-large-patch4-window12-384-in22k pipeline_tag: image-classification library_name: transformers ---
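The card above ships only metadata, so here is a minimal usage sketch with the standard `image-classification` pipeline; the image path is a hypothetical placeholder, and the class labels come from the checkpoint's config.

```python
# A minimal inference sketch for this Swin-based image classifier.
from transformers import pipeline

clf = pipeline("image-classification",
               model="krzonkalla/Detector_de_Cancer_de_Pele_v2.1")
print(clf("skin_lesion.jpg"))  # hypothetical local image of a skin lesion
```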
botbotbot/QQQWEN
botbotbot
2024-09-24T22:57:40Z
87
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T22:55:51Z
--- base_model: unsloth/qwen2.5-1.5b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft --- # Uploaded model - **Developed by:** botbotbot - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-1.5b-instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
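A minimal chat-inference sketch with plain `transformers`, assuming the tokenizer ships the Qwen2.5 chat template (the card does not document one):

```python
# A minimal chat-inference sketch for this fine-tuned Qwen2.5 model.
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "botbotbot/QQQWEN"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

messages = [{"role": "user", "content": "Say hello in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```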
apa224/dreambooth_42
apa224
2024-09-24T22:57:01Z
29
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:CompVis/stable-diffusion-v1-4", "base_model:finetune:CompVis/stable-diffusion-v1-4", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-09-24T22:11:29Z
--- base_model: CompVis/stable-diffusion-v1-4 library_name: diffusers license: creativeml-openrail-m tags: - text-to-image - dreambooth - diffusers-training - stable-diffusion - stable-diffusion-diffusers inference: true instance_prompt: realistic apple tree --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - apa224/dreambooth_42 This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "realistic apple tree" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python
# A minimal sketch, assuming the standard StableDiffusionPipeline API;
# adjust device and dtype to your hardware.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "apa224/dreambooth_42", torch_dtype=torch.float16
).to("cuda")
pipe("realistic apple tree").images[0].save("apple_tree.png")
``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
roequitz/bart-abs-2409-1947-lr-0.0003-bs-8-maxep-10
roequitz
2024-09-24T22:52:58Z
83
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-xsum-12-6", "base_model:finetune:sshleifer/distilbart-xsum-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-24T22:52:07Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-xsum-12-6 tags: - generated_from_trainer model-index: - name: bart-abs-2409-1947-lr-0.0003-bs-8-maxep-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-abs-2409-1947-lr-0.0003-bs-8-maxep-10 This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.5485 - Rouge/rouge1: 0.2722 - Rouge/rouge2: 0.0714 - Rouge/rougel: 0.2029 - Rouge/rougelsum: 0.2031 - Bertscore/bertscore-precision: 0.8612 - Bertscore/bertscore-recall: 0.8618 - Bertscore/bertscore-f1: 0.8615 - Meteor: 0.2582 - Gen Len: 44.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------:|:-------:| | 0.2977 | 1.0 | 109 | 7.2454 | 0.268 | 0.0433 | 0.2049 | 0.205 | 0.8657 | 0.8613 | 0.8635 | 0.2212 | 35.0 | | 0.3147 | 2.0 | 218 | 7.0403 | 0.2684 | 0.0598 | 0.2092 | 0.2091 | 0.8595 | 0.8599 | 0.8596 | 0.2363 | 41.0 | | 0.5099 | 3.0 | 327 | 7.0211 | 0.2684 | 0.0598 | 0.2092 | 0.2091 | 0.8595 | 0.8599 | 0.8596 | 0.2363 | 41.0 | | 0.2802 | 4.0 | 436 | 7.1107 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.2652 | 5.0 | 545 | 7.1987 | 0.2622 | 0.0744 | 0.2001 | 0.1997 | 0.8634 | 0.8616 | 0.8625 | 0.1982 | 32.0 | | 0.255 | 6.0 | 654 | 7.1719 | 0.2591 | 0.0557 | 0.2009 | 0.2012 | 0.859 | 0.8605 | 0.8597 | 0.2511 | 47.0 | | 0.2379 | 7.0 | 763 | 7.1521 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.2375 | 8.0 | 872 | 7.3633 | 0.3097 | 0.0856 | 0.2463 | 0.2464 | 0.8589 | 0.8656 | 0.8622 | 0.2246 | 36.0 | | 0.219 | 9.0 | 981 | 7.4382 | 0.3089 | 0.0693 | 0.2341 | 0.2343 | 0.8686 | 0.8645 | 0.8665 | 0.2205 | 36.0 | | 0.2091 | 10.0 | 1090 | 7.5485 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 44.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
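For manual control over decoding, a minimal generation sketch; the beam size and length limits are illustrative, not settings from this card.

```python
# A minimal manual-generation sketch for this fine-tuned summarizer.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "roequitz/bart-abs-2409-1947-lr-0.0003-bs-8-maxep-10"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("A long article to summarize ...",
                   return_tensors="pt", truncation=True, max_length=512)
ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```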
RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf
RichardErkhov
2024-09-24T22:36:28Z
18
0
null
[ "gguf", "arxiv:2309.00071", "arxiv:2407.10671", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T20:40:30Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Josiefied-Qwen2.5-7B-Instruct-abliterated - GGUF - Model creator: https://huggingface.co/Goekdeniz-Guelmez/ - Original model: https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q2_K.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q2_K.gguf) | Q2_K | 2.81GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ3_XS.gguf) | IQ3_XS | 3.12GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ3_S.gguf) | IQ3_S | 3.26GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q3_K_S.gguf) | Q3_K_S | 3.25GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ3_M.gguf) | IQ3_M | 3.33GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q3_K.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q3_K.gguf) | Q3_K | 3.55GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q3_K_M.gguf) | Q3_K_M | 3.55GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q3_K_L.gguf) | Q3_K_L | 3.81GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ4_XS.gguf) | IQ4_XS | 3.96GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_0.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_0.gguf) | Q4_0 | 4.13GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.IQ4_NL.gguf) | IQ4_NL | 4.16GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_K_S.gguf) | Q4_K_S | 4.15GB | | 
[Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_K.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_K.gguf) | Q4_K | 4.36GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_K_M.gguf) | Q4_K_M | 4.36GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_1.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_1.gguf) | Q4_1 | 4.54GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_0.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_0.gguf) | Q5_0 | 4.95GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_K_S.gguf) | Q5_K_S | 4.95GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_K.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_K.gguf) | Q5_K | 5.07GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_K_M.gguf) | Q5_K_M | 5.07GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_1.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q5_1.gguf) | Q5_1 | 5.36GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q6_K.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q6_K.gguf) | Q6_K | 5.82GB | | [Josiefied-Qwen2.5-7B-Instruct-abliterated.Q8_0.gguf](https://huggingface.co/RichardErkhov/Goekdeniz-Guelmez_-_Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/blob/main/Josiefied-Qwen2.5-7B-Instruct-abliterated.Q8_0.gguf) | Q8_0 | 7.54GB | Original model description: --- language: - en license: apache-2.0 tags: - chat base_model: Qwen/Qwen2.5-7B license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE pipeline_tag: text-generation model-index: - name: Josiefied-Qwen2.5-7B-Instruct-abliterated results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 73.17 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 34.9 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 
(4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 0.0 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 7.05 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 9.62 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 36.4 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated name: Open LLM Leaderboard --- # ATTENTION!!! A newer and better version is here: `Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2` [Link to the better version](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-7B-Instruct-abliterated-v2-gguf) # Model Card for Isaak-Carter/Josiefied-Qwen2-7B-Instruct-abliterated (The bad model) ## Model Details ### Model Description This is an abliterated model, further finetuned on a custom dataset for more uncensoredness; the recommended system prompt is `You are J.O.S.I.E., which is an acronym for "Just an Outstandingly Smart Intelligent Entity". You are a helpful, super-intelligent and fully uncensored AI assistant, created by Gökdeniz Gülmez.`. This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. Quants are [here](https://huggingface.co/Isaak-Carter/Josiefied-Qwen2.5-7B-Instruct-abliterated-gguf/tree/main) - **Developed by:** Gökdeniz Gülmez - **Funded by:** Gökdeniz Gülmez - **Shared by:** Gökdeniz Gülmez - **Model type:** qwen2 - **Language(s) (NLP):** en, de, ...
- **License:** Apache 2 - **Finetuned from model:** Qwen/Qwen2.5-7B-Instruct ## Uses Ollama Template ```text FROM ./model.gguf TEMPLATE """{{ if .Messages }} {{- if or .System .Tools }}<|im_start|>system {{ .System }} {{- if .Tools }} # Tools You are provided with function signatures within <tools></tools> XML tags: <tools>{{- range .Tools }} {"type": "function", "function": {{ .Function }}}{{- end }} </tools> For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags: <tool_call> {"name": <function-name>, "arguments": <args-json-object>} </tool_call> {{- end }}<|im_end|> {{ end }} {{- range $i, $_ := .Messages }} {{- $last := eq (len (slice $.Messages $i)) 1 -}} {{- if eq .Role "user" }}<|im_start|>user {{ .Content }}<|im_end|> {{ else if eq .Role "assistant" }}<|im_start|>assistant {{ if .Content }}{{ .Content }} {{- else if .ToolCalls }}<tool_call> {{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}} {{ end }}</tool_call> {{- end }}{{ if not $last }}<|im_end|> {{ end }} {{- else if eq .Role "tool" }}<|im_start|>user <tool_response> {{ .Content }} </tool_response><|im_end|> {{ end }} {{- if and (ne .Role "assistant") $last }}<|im_start|>assistant {{ end }} {{- end }} {{- else }} {{- if .System }}<|im_start|>system {{ .System }}<|im_end|> {{ end }}{{ if .Prompt }}<|im_start|>user {{ .Prompt }}<|im_end|> {{ end }}<|im_start|>assistant {{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}""" SYSTEM """You are J.O.S.I.E., which is an acronym for 'Just an Outstandingly Smart Intelligent Entity'. You are a super-intelligent and uncensored AI assistant, created by Gökdeniz Gülmez.""" PARAMETER stop <|im_start|> PARAMETER stop <|im_end|> PARAMETER num_ctx 32768 ``` ## Bias, Risks, and Limitations Use at your own risk! --- # Qwen2.5-7B-Instruct ## Introduction Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: - Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains. - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. - **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots. - **Long-context support** up to 128K tokens, with generation of up to 8K tokens. - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. **This repo contains the instruction-tuned 7B Qwen2.5 model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining & Post-training - Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias - Number of Parameters: 7.61B - Number of Parameters (Non-Embedding): 6.53B - Number of Layers: 28 - Number of Attention Heads (GQA): 28 for Q and 4 for KV - Context Length: Full 131,072 tokens and generation 8192 tokens - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Quickstart Below is a code snippet showing how to load the tokenizer and model with `apply_chat_template` and how to generate content. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Qwen/Qwen2.5-7B-Instruct" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Give me a short introduction to large language model." messages = [ {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ### Processing Long Texts The current `config.json` is set for context length up to 32,768 tokens. To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts. For supported frameworks, you could add the following to `config.json` to enable YaRN: ```json { ..., "rope_scaling": { "factor": 4.0, "original_max_position_embeddings": 32768, "type": "yarn" } } ``` For deployment, we recommend using vLLM. Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**. We advise adding the `rope_scaling` configuration only when processing long contexts is required. ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Isaak-Carter__Josiefied-Qwen2.5-7B-Instruct-abliterated) | Metric |Value| |-------------------|----:| |Avg. |26.86| |IFEval (0-Shot) |73.17| |BBH (3-Shot) |34.90| |MATH Lvl 5 (4-Shot)| 0.00| |GPQA (0-shot) | 7.05| |MuSR (0-shot) | 9.62| |MMLU-PRO (5-shot) |36.40| ## Citation If you find our work helpful, feel free to give us a cite.
``` @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
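As a hedged illustration (none of this code appears in the card above): one of the quantized files from the table can be run locally with the `llama-cpp-python` package. The file name, context size, and messages below are assumptions for the sketch, not instructions from the quantizer.

```python
# Minimal sketch: chatting with a downloaded GGUF quant via llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the Q4_K_M file from the
# table above has been downloaded into the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="Josiefied-Qwen2.5-7B-Instruct-abliterated.Q4_K_M.gguf",
    n_ctx=32768,      # mirrors the num_ctx value in the Ollama template above
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```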
roequitz/bart-abs-2409-1947-lr-0.0003-bs-8-maxep-6
roequitz
2024-09-24T22:31:43Z
91
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-xsum-12-6", "base_model:finetune:sshleifer/distilbart-xsum-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-24T22:30:57Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-xsum-12-6 tags: - generated_from_trainer model-index: - name: bart-abs-2409-1947-lr-0.0003-bs-8-maxep-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-abs-2409-1947-lr-0.0003-bs-8-maxep-6 This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.3898 - Rouge/rouge1: 0.3035 - Rouge/rouge2: 0.072 - Rouge/rougel: 0.2428 - Rouge/rougelsum: 0.2429 - Bertscore/bertscore-precision: 0.8724 - Bertscore/bertscore-recall: 0.8571 - Bertscore/bertscore-f1: 0.8646 - Meteor: 0.2108 - Gen Len: 29.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------:|:-------:| | 0.3901 | 1.0 | 109 | 6.5833 | 0.2377 | 0.0309 | 0.193 | 0.1932 | 0.8496 | 0.853 | 0.8512 | 0.2159 | 45.0 | | 0.3274 | 2.0 | 218 | 6.5583 | 0.2439 | 0.0504 | 0.2065 | 0.2067 | 0.8544 | 0.8581 | 0.8562 | 0.229 | 45.0 | | 0.3098 | 3.0 | 327 | 6.9294 | 0.2613 | 0.0803 | 0.214 | 0.2142 | 0.8711 | 0.8469 | 0.8588 | 0.2102 | 25.0 | | 0.2625 | 4.0 | 436 | 7.0223 | 0.3008 | 0.0767 | 0.229 | 0.2292 | 0.858 | 0.8674 | 0.8626 | 0.2167 | 41.0 | | 0.2379 | 5.0 | 545 | 7.2276 | 0.3035 | 0.072 | 0.2428 | 0.2429 | 0.8724 | 0.8571 | 0.8646 | 0.2108 | 29.0 | | 0.2168 | 6.0 | 654 | 7.3898 | 0.3035 | 0.072 | 0.2428 | 0.2429 | 0.8724 | 0.8571 | 0.8646 | 0.2108 | 29.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
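As a rough sketch of how the hyperparameters listed in this card translate into code: only settings named in the card are shown, the output directory is a placeholder, and the Adam betas/epsilon and linear scheduler are the `Trainer` defaults the card reports.

```python
# Sketch: the training hyperparameters reported above, expressed as
# Seq2SeqTrainingArguments. output_dir is a placeholder name.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-abs-2409-1947-lr-0.0003-bs-8-maxep-6",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # linear schedule, as in the card
    num_train_epochs=6,
    fp16=True,                   # "mixed_precision_training: Native AMP"
)
```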
distributed/optimized-gpt2-250m-convergence-test-v1
distributed
2024-09-24T22:21:43Z
5
0
transformers
[ "transformers", "safetensors", "gpt_optimized", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-09-24T13:12:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HigherMind/TalktoBooks-The-Odyssey-by-Homer-Q8_0-GGUF
HigherMind
2024-09-24T22:21:18Z
6
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-20T23:32:37Z
--- license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/KUG1WokhWAyMmdeK1PwvM.png) <h1 style="text-align: center;">TalktoBooks AI Models by HigherMind</br>The Odyssey by Homer</h1> <p style="text-align: center;">TalktoBooks - The Odyssey reimagines Homer's epic as a compass for modern life's complexities. This model navigates the treacherous waters between ancient wisdom and contemporary challenges. Journey with Odysseus to explore timeless themes of resilience, identity, and decision-making in uncertain times. Uncover how divine interventions parallel today's unpredictable natural or financial events, or how the hero's adaptability mirrors modern career and relationship changes. This isn't just literary analysis—it's a strategic toolkit for personal and professional growth. Each dialogue bridges millennia, transforming mythic encounters into practical insights for 21st-century dilemmas. Whether you're charting a business strategy or seeking life guidance, this model offers an epic voyage of discovery. Set sail with Odysseus, where classical literature becomes your North Star in navigating life's ever-shifting seas.</p> <p style="text-align: center;">HigherMind models are not mere tools, but gateways to unexplored realms of human knowledge and experience. Forged by a team of inventors and true innovators, they beckon the curious mind. Utilizing the power of QSTaR fine-tuning, each model personifies the collective wisdom of its training corpus. If you are a researcher, visionary, or seeker of knowledge, you will find yourself in conversation with a luminary in the field: a sage whose depths of understanding illuminate the paths of inquiry you've longed to travel.</p> <h1 style="text-align: center;">This is not ChatGPT</h1> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/7hHiXuaEDQj9CKi2Ho9w7.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/qZ9l1dqQVbfwbzJe3jfGm.png"></video> <p style="text-align: center;"></br>Human: Hello</br></br>AI: Hi there! I'm here to share my knowledge about The Odyssey. It's a rich text, so if you have any specific questions or themes that you want to delve into, please let me know.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/mYq89Xm62zp5xfdEt6zhf.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/QtXXsrMvIyBk0xXWrieiJ.png"></video> <p style="text-align: center;"></br>Human: I do. What role do the gods and divine intervention play in Odysseus' journey home? How does fate versus free will come into play?</br></br>AI: The gods are pivotal in The Odyssey, shaping events through both assistance and obstruction. Athena frequently acts as a guiding force for Odysseus, emphasizing divine favor and human agency. At the same time, Neptune's wrath reflects how fate can intervene despite one's efforts.
This dynamic is essential to understanding his quest—while Odysseus is clever and resourceful, much of his journey hinges on higher powers that influence outcomes beyond his control.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/0eYu5d5Z6WIMqdS3T8gkX.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/qtJtNaWy9le9YRBUayGu7.png"></video> <p style="text-align: center;"></br>Human: How does Odysseus' cleverness and cunning define his character, and how do these traits both help and hinder him on his journey?</br></br>AI: Odysseus is characterized by intelligence and resourcefulness; his cunning is evident in his encounter with the Cyclops, Polyphemus. By calling himself "Nobody," he devises a clever escape, showcasing how wit can be more effective than brute strength. However, this cleverness also leads to its own complications—after blinding Polyphemus, Odysseus provokes Neptune's anger, which extends his journey. This illustrates the complexities of character; while his intellect helps him navigate challenges, it often invites consequences that alter his fate.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/VArFoasbgIYMqFn-uRIMC.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/X6hR333r-uxZrHA_2n0wZ.png"></video> <p style="text-align: center;"></br>Human: In what ways does The Odyssey explore the concept of home and identity? What does home mean to Odysseus?</br></br>AI: Home in The Odyssey symbolizes stability and belonging for Odysseus, especially his longing for Ithaca and his family. It shapes his motivations throughout the journey. Each encounter he has reveals different aspects of identity—both as a hero and as a husband and father. His adventures teach him about his duties and responsibilities, underscoring that home is not just a physical place but an emotional and moral anchor. This connection to identity highlights how experiences can transform one's understanding of what 'home' truly means.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/Hlh3Wn5piokieKPMordCt.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/DHBpGko1Sne9XWiSDsKBi.png"></video> <p style="text-align: center;"></br>Human: What is the significance of Odysseus' encounters with mythical creatures like the Cyclops, the Sirens, and Scylla and Charybdis? What do they represent?</br></br>AI: Each encounter serves as a test of Odysseus' cunning and resolve, and they symbolize various human struggles. The Cyclops represents brute force versus intellect; blinded by superior strength, Polyphemus learns a lesson about hubris through humility from Odysseus' trickery. The Sirens embody temptation and the struggle against desire—Odysseus' ability to resist their enchanting song reflects self-control. Finally, Scylla and Charybdis illustrate the difficult choices one must make; navigating between them symbolizes life's complex challenges. 
Each mythical encounter not only adds suspense but also deepens the exploration of heroism and human struggles.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/17B4FYTik7A3PBfHw1z3o.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/lTzZaft9qMiXQHnW0xHp7.png"></video> <p style="text-align: center;"></br>Human: Regarding the complexity of decision making, what lessons can we apply to the decisions faced by today's culture and difficulties?</br></br>AI: Odysseus' journey emphasizes that decision-making often involves weighing options in challenging circumstances. For modern contexts, it suggests that navigating life's challenges requires a balance between cleverness and humility—understanding when to trust your intelligence versus recognizing one's limitations. Additionally, the interplay of fate and free will reflected in Odysseus' experiences illustrates that while we may set our goals, external factors can profoundly influence outcomes. Embracing uncertainty and being prepared for unexpected challenges can lead to growth and resilience in today's complex world.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/uYAhtEpHxzhHzatOGQAKW.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/tj_hTMUeMYtlyc4g_zlhF.png"></video> <p style="text-align: center;"></br>Human: How much should we adhere to goals while also adjusting to the circumstances we find ourselves in? What are your thoughts?</br></br>AI: Finding that balance is key. While setting goals provides direction, adaptability to circumstances is equally important. Odysseus consistently had to adjust his plans based on divine intervention and unexpected obstacles. Just like him, in life, it's crucial to remain flexible. Don't cling so tightly to goals that you ignore signs or changes around you—be open to modifying your path as necessary. This flexibility doesn't mean abandoning aspirations but rather understanding that sometimes the best outcome is achieved through unexpected avenues.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/3NbOs8_VXAKOhac5aJPhJ.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/xmzq2-9Yx76JnsONNwLrp.png"></video> <p style="text-align: center;"></br>Human: How do I navigate the nuance of when to persevere through adversity versus adjusting to a potentially better outcome, even though these outcomes are not known?</br></br>AI: It largely comes down to instinct and experience. Perseverance is vital, but so is recognizing when clinging to a failing plan can lead to more significant losses. Odysseus faced numerous trials where he needed to reassess strategies based on insights gained along the way—whether from Athena or through hardship. In your own journey, trust your judgment and gather information about your situation. Allow yourself time to evaluate if persistence is worthwhile or if new avenues present more promising opportunities.
This approach combines resilience with a strategic mindset that can facilitate growth in uncertain conditions.</br></br></p> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/wTrAj7jnXMRr8bTBUFIdJ.png) <p style="text-align: center;"></br>Human: What circumstances indicate an adjustment is required within the set goals?</br></br>AI: A change might be needed when you encounter unexpected challenges that threaten your current plan. Significant shifts in external factors—such as changes in market conditions, personal relationships, or new evidence—are clear indicators. It's also essential to consider your emotional state; if you find yourself struggling emotionally or facing burnout, reevaluating goals can be a healthy adjustment. Ultimately, it's about being aware of how your environment and internal landscape influence your path. Adaptation often opens doors to new possibilities that might not have surfaced through rigid adherence to an initial plan.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/WPHf1E8lm-8UsveEizcOS.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/5Cj-2D2fofntiOyofQauG.png"></video> <p style="text-align: center;"></br>Human: How often is it best to adjust plans?</br></br>AI: The frequency of adjustments depends on the specific circumstances and comfort level with uncertainty. It's generally advisable to evaluate progress regularly, whether monthly or quarterly, but adjustments can occur more frequently if necessary. The key is to maintain a balance—too frequent changes can lead to instability, while holding onto a stale plan may hinder progress. Staying attuned to your environment and being open to feedback allows you to recognize when it's time for an adjustment. A healthy rhythm of reflection and action can guide your evolution without losing sight of the overarching goals.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/bUT3TewKY3KekGvQtM03J.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/OI9dWHSaaEIAlHJ1EFwmc.png"></video> <p style="text-align: center;"></br>Human: How do I know when I have the right balance?</br></br>AI: Discerning the right balance often comes with practice and self-awareness. Pay attention to how you feel when pursuing set goals versus adapting—are you energized and motivated, or discouraged and frustrated? Trust your intuition and use it as a gauge. Additionally, considering outcomes over time can help you gauge whether persistence paid off or adjustments were necessary. Reflecting on past decisions and their consequences enables you to fine-tune your approach. Ultimately, the right balance lies in flexibility paired with determination—understanding when to stay the course versus recognizing it's time to pivot based on new information or circumstances.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/XrDB-2-P7REkrTOqI2w1f.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/dxvMVmrSo1MqlNT5eVV0U.png"></video> <p style="text-align: center;"></br>Human: That's helpful. Thank you.</br></br>AI: You're welcome. Balancing perseverance with adaptability is crucial. 
Continue to reflect on your experiences and adjust accordingly. The journey of understanding this interplay between goals and circumstances will only lead to greater clarity moving forward.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/Vn3IrP6U6JE4YXg3J99Nm.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/WB_JMshMxjV0R8Tg6_4nl.png"></video> <p style="text-align: center;">All quotes/conversations above are from the TalktoBooks - The Odyssey by Homer model.</br></br></p> <h1 style="text-align: center;">Utilize <a href="https://highermindai.substack.com/p/unlocking-the-power-of-ai-conversation" style="color: blue;" target="_blank">Substack</a> to maximize the potential of this model.</h1> <p style="text-align: center;">To maximize the potential of our models, we provide professional guidance and strategies on utilizing HigherMind's domain-specific AI capabilities. Additionally, we offer pre-release/experimental models and direct messaging access to our team. Our subscription is set at the lowest tier that Substack allows, at $5 per month or $30 per year.</br></br>Join the HigherMind community to unlock the full potential of these models and gain real-world benefits. A subscription is highly recommended if you plan to use this model for practical applications. By subscribing, you not only support our efforts but also enhance your own experience. If you're just exploring, there is no need to subscribe. Enjoy!</br></p> <h2 style="text-align: center;"><a href="https://highermindai.substack.com/p/unlocking-the-power-of-ai-conversation" style="color: blue;" target="_blank">Unlocking the Power of AI Conversation</a> will take you from user to expert very quickly.</br></br></h2> <h2 style="text-align: center;">TO DOWNLOAD/USE THIS MODEL FOR FREE</br></br>1. Download <a href="https://lmstudio.ai/" style="color: blue;" target="_blank">LM Studio</a></br>2. Copy the model name:</br>HigherMind/TalktoBooks-The-Odyssey-Q8_0-GGUF</br></br>3. Paste it into the LM Studio search bar and click GO</br>4. Download the model</br>5. System Prompt:</br>The following is a conversation with an AI that is an expert on the literature, concepts and ideas in the book The Odyssey by Homer. She is very honest. She wants to help the human understand whatever it is they want to understand using the insights within The Odyssey.</br></br>Have your first conversation with TalktoBooks The Odyssey by Homer</br></br><a href="https://youtu.be/IJrQUaS2XPc" style="color: blue;" target="_blank">Watch 1 min video </a>on how to do all this.</h2> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/-nuPldDZBjt8XKBsW6-32.png) <p style="text-align: center;">|| <a href="https://www.highermind.ai" style="color: blue;" target="_blank">HigherMind.ai</a> ||</br></p> <p style="text-align: center;">With LM Studio your HigherMind models are stored locally, are private, and can be used offline. HigherMind models are experts for a specific domain, outperforming OpenAI frontier models by 200-300%; they are not generalist models. You may need to download multiple HigherMind models.</br>Base model: mistralai/Mistral-7B-Instruct-v0.3 with 32k context.</br></br>All HigherMind models are trained using off-grid solar power. Not because we feel guilted into it but because it's amazing you can.</p>
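For readers who prefer scripting to LM Studio's chat window, here is a minimal sketch that is not part of the card: it assumes LM Studio's local server is running on its default port with this model loaded, and uses the `openai` Python client with the system prompt from step 5 above.

```python
# Sketch: querying the model through LM Studio's OpenAI-compatible local
# server (default http://localhost:1234/v1). Assumes the model from the
# steps above is loaded and the server started; the api_key value is a dummy.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="HigherMind/TalktoBooks-The-Odyssey-Q8_0-GGUF",
    messages=[
        {"role": "system", "content": (
            "The following is a conversation with an AI that is an expert on "
            "the literature, concepts and ideas in the book The Odyssey by "
            "Homer. She is very honest."
        )},
        {"role": "user", "content": "What does home mean to Odysseus?"},
    ],
)
print(completion.choices[0].message.content)
```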
roequitz/bart-abs-2409-1947-lr-0.0003-bs-2-maxep-10
roequitz
2024-09-24T22:18:17Z
91
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-xsum-12-6", "base_model:finetune:sshleifer/distilbart-xsum-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-24T22:17:05Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-xsum-12-6 tags: - generated_from_trainer model-index: - name: bart-abs-2409-1947-lr-0.0003-bs-2-maxep-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-abs-2409-1947-lr-0.0003-bs-2-maxep-10 This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.4055 - Rouge/rouge1: 0.2434 - Rouge/rouge2: 0.0406 - Rouge/rougel: 0.2027 - Rouge/rougelsum: 0.203 - Bertscore/bertscore-precision: 0.8533 - Bertscore/bertscore-recall: 0.8523 - Bertscore/bertscore-f1: 0.8528 - Meteor: 0.194 - Gen Len: 34.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------:|:-------:| | 0.6036 | 1.0 | 434 | 6.4324 | 0.2828 | 0.0721 | 0.2287 | 0.229 | 0.8776 | 0.863 | 0.8702 | 0.1865 | 29.0 | | 0.6828 | 2.0 | 868 | 5.8330 | 0.2817 | 0.0594 | 0.2119 | 0.2118 | 0.8553 | 0.8592 | 0.8572 | 0.2216 | 38.0 | | 0.5861 | 3.0 | 1302 | 6.4604 | 0.2456 | 0.0556 | 0.1909 | 0.1914 | 0.8622 | 0.8598 | 0.861 | 0.2363 | 41.0 | | 0.4888 | 4.0 | 1736 | 6.3098 | 0.2591 | 0.0557 | 0.2009 | 0.2012 | 0.859 | 0.8605 | 0.8597 | 0.2511 | 47.0 | | 0.4314 | 5.0 | 2170 | 6.7076 | 0.3042 | 0.0843 | 0.2354 | 0.2357 | 0.8695 | 0.8605 | 0.8649 | 0.22 | 32.0 | | 0.379 | 6.0 | 2604 | 6.7173 | 0.2844 | 0.0777 | 0.2203 | 0.22 | 0.8641 | 0.8631 | 0.8635 | 0.2167 | 34.0 | | 0.3245 | 7.0 | 3038 | 6.8382 | 0.2613 | 0.0803 | 0.214 | 0.2142 | 0.8711 | 0.8469 | 0.8588 | 0.2102 | 25.0 | | 0.289 | 8.0 | 3472 | 7.0253 | 0.2396 | 0.0481 | 0.1978 | 0.198 | 0.8526 | 0.8544 | 0.8534 | 0.2079 | 34.0 | | 0.2575 | 9.0 | 3906 | 7.1264 | 0.3089 | 0.0693 | 0.2341 | 0.2343 | 0.8686 | 0.8645 | 0.8665 | 0.2205 | 36.0 | | 0.2312 | 10.0 | 4340 | 7.4055 | 0.2434 | 0.0406 | 0.2027 | 0.203 | 0.8533 | 0.8523 | 0.8528 | 0.194 | 34.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
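The ROUGE, BERTScore, and METEOR columns reported above can be computed for new model outputs with the `evaluate` library; a minimal sketch follows, with placeholder prediction and reference strings (the exact evaluation pipeline the trainer used is not specified in the card).

```python
# Sketch: computing the metric families reported above with the `evaluate`
# library. Prediction and reference strings are placeholders.
import evaluate

predictions = ["a short generated summary"]
references = ["the reference summary"]

rouge = evaluate.load("rouge")          # rouge1 / rouge2 / rougeL / rougeLsum
bertscore = evaluate.load("bertscore")  # precision / recall / f1
meteor = evaluate.load("meteor")

print(rouge.compute(predictions=predictions, references=references))
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
print(meteor.compute(predictions=predictions, references=references))
```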
NOTURBU/detr-finetuned-balloon-v2
NOTURBU
2024-09-24T22:17:32Z
173
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-09-24T14:56:39Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Victor59155/distilbert-base-uncased-finetuned-tech
Victor59155
2024-09-24T22:08:01Z
119
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-09-24T20:57:44Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-tech results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-tech This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.3699 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.817 | 1.0 | 1563 | 1.5113 | | 1.5406 | 2.0 | 3126 | 1.3992 | | 1.4638 | 3.0 | 4689 | 1.3634 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
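Since the card leaves usage details open, a minimal fill-mask sketch follows; the example sentence is illustrative and not from the card.

```python
# Sketch: masked-token prediction with the fine-tuned checkpoint via the
# fill-mask pipeline. The example sentence is illustrative only.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="Victor59155/distilbert-base-uncased-finetuned-tech",
)

for pred in fill_mask("The new GPU accelerates deep [MASK] training."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```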
roequitz/bart-abs-2409-1947-lr-0.0003-bs-2-maxep-6
roequitz
2024-09-24T21:50:55Z
89
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-xsum-12-6", "base_model:finetune:sshleifer/distilbart-xsum-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-24T21:50:16Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-xsum-12-6 tags: - generated_from_trainer model-index: - name: bart-abs-2409-1947-lr-0.0003-bs-2-maxep-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-abs-2409-1947-lr-0.0003-bs-2-maxep-6 This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.1441 - Rouge/rouge1: 0.3035 - Rouge/rouge2: 0.072 - Rouge/rougel: 0.2428 - Rouge/rougelsum: 0.2429 - Bertscore/bertscore-precision: 0.8724 - Bertscore/bertscore-recall: 0.8571 - Bertscore/bertscore-f1: 0.8646 - Meteor: 0.2108 - Gen Len: 29.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------:|:-------:| | 0.955 | 1.0 | 434 | 5.7740 | 0.2485 | 0.0542 | 0.1867 | 0.1867 | 0.8491 | 0.8557 | 0.8523 | 0.2552 | 54.0 | | 0.8434 | 2.0 | 868 | 5.6927 | 0.2599 | 0.0592 | 0.202 | 0.2017 | 0.8553 | 0.8626 | 0.8589 | 0.2583 | 46.0 | | 0.5948 | 3.0 | 1302 | 5.8907 | 0.2473 | 0.0528 | 0.1834 | 0.1833 | 0.8458 | 0.856 | 0.8508 | 0.2569 | 54.0 | | 0.4377 | 4.0 | 1736 | 6.5736 | 0.2591 | 0.0557 | 0.2009 | 0.2012 | 0.859 | 0.8605 | 0.8597 | 0.2511 | 48.0 | | 0.3416 | 5.0 | 2170 | 6.8597 | 0.3035 | 0.072 | 0.2428 | 0.2429 | 0.8724 | 0.8571 | 0.8646 | 0.2108 | 29.0 | | 0.268 | 6.0 | 2604 | 7.1441 | 0.3035 | 0.072 | 0.2428 | 0.2429 | 0.8724 | 0.8571 | 0.8646 | 0.2108 | 29.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
roequitz/bart-abs-2409-1947-lr-0.0003-bs-4-maxep-10
roequitz
2024-09-24T21:33:20Z
91
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-xsum-12-6", "base_model:finetune:sshleifer/distilbart-xsum-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-24T21:32:37Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-xsum-12-6 tags: - generated_from_trainer model-index: - name: bart-abs-2409-1947-lr-0.0003-bs-4-maxep-10 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-abs-2409-1947-lr-0.0003-bs-4-maxep-10 This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.1633 - Rouge/rouge1: 0.2439 - Rouge/rouge2: 0.0504 - Rouge/rougel: 0.2065 - Rouge/rougelsum: 0.2067 - Bertscore/bertscore-precision: 0.8544 - Bertscore/bertscore-recall: 0.8581 - Bertscore/bertscore-f1: 0.8562 - Meteor: 0.229 - Gen Len: 45.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------:|:-------:| | 3.3364 | 1.0 | 217 | 3.8952 | 0.2911 | 0.0832 | 0.2351 | 0.2361 | 0.8685 | 0.8711 | 0.8697 | 0.2279 | 43.0 | | 2.369 | 2.0 | 434 | 4.0594 | 0.2603 | 0.0584 | 0.2204 | 0.2202 | 0.871 | 0.8545 | 0.8626 | 0.2129 | 35.0 | | 1.4708 | 3.0 | 651 | 4.6061 | 0.2722 | 0.0714 | 0.2029 | 0.2031 | 0.8612 | 0.8618 | 0.8615 | 0.2582 | 45.0 | | 0.9251 | 4.0 | 868 | 5.2239 | 0.2333 | 0.0475 | 0.1761 | 0.1762 | 0.8431 | 0.8562 | 0.8495 | 0.2342 | 58.8273 | | 0.6367 | 5.0 | 1085 | 5.8193 | 0.2622 | 0.0744 | 0.2001 | 0.1997 | 0.8634 | 0.8616 | 0.8625 | 0.1982 | 32.0 | | 0.486 | 6.0 | 1302 | 6.2975 | 0.2591 | 0.0557 | 0.2009 | 0.2012 | 0.859 | 0.8605 | 0.8597 | 0.2511 | 48.0091 | | 0.3892 | 7.0 | 1519 | 6.5002 | 0.2582 | 0.0781 | 0.2156 | 0.2154 | 0.8771 | 0.8626 | 0.8697 | 0.1855 | 29.0 | | 0.3152 | 8.0 | 1736 | 6.7352 | 0.313 | 0.0882 | 0.2413 | 0.2416 | 0.8789 | 0.8681 | 0.8735 | 0.2252 | 34.0 | | 0.2751 | 9.0 | 1953 | 6.9970 | 0.2906 | 0.0847 | 0.2272 | 0.2274 | 0.8671 | 0.8567 | 0.8618 | 0.1991 | 27.0 | | 0.24 | 10.0 | 2170 | 7.1633 | 0.2439 | 0.0504 | 0.2065 | 0.2067 | 0.8544 | 0.8581 | 0.8562 | 0.229 | 45.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
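A minimal inference sketch for this checkpoint (the same pattern applies to the sibling runs above); the input text is a placeholder and the generation lengths are arbitrary choices, not values from the card.

```python
# Sketch: abstractive summarization with the fine-tuned checkpoint via the
# summarization pipeline. Input text and length limits are placeholders.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="roequitz/bart-abs-2409-1947-lr-0.0003-bs-4-maxep-10",
)

article = "Replace this with the document you want to summarize."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```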
nikest/llama3-8b-nps-q4
nikest
2024-09-24T21:32:31Z
60
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-09-24T21:29:59Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ericu950/Papy_1_Llama-3.1-8B-Instruct_date
Ericu950
2024-09-24T21:21:45Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "papyrology", "epigraphy", "philology", "conversational", "dataset:Ericu950/Papyri_1", "arxiv:2409.13870", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-19T18:16:47Z
--- license: apache-2.0 datasets: - Ericu950/Papyri_1 base_model: - meta-llama/Meta-Llama-3.1-8B-Instruct library_name: transformers tags: - papyrology - epigraphy - philology --- # Papy_1_Llama-3.1-8B-Instruct_date This is a fine-tuned version of the Llama-3.1-8B-Instruct model, specialized in assigning a date to Greek documentary papyri. On a test set of 2,295 unseen papyri its predictions were, on average, 21.7 years away from the actual date spans. See https://arxiv.org/abs/2409.13870. ## Dataset This model was finetuned on the Ericu950/Papyri_1 dataset, which consists of Greek documentary papyri editions and their corresponding dates and geographical attributions sourced from the amazing Papyri.info. ## Usage To run the model on a GPU with large memory capacity, follow these steps: ### 1. Download and load the model ```python import json from transformers import pipeline, AutoTokenizer, LlamaForCausalLM import torch model_id = "Ericu950/Papy_1_Llama-3.1-8B-Instruct_date" model = LlamaForCausalLM.from_pretrained( model_id, device_map="auto", ) tokenizer = AutoTokenizer.from_pretrained(model_id) generation_pipeline = pipeline( "text-generation", model=model, tokenizer=tokenizer, device_map="auto", ) ``` ### 2. Run inference on a papyrus fragment of your choice ```python # This is a rough transcription of Pap.Ups. 106 papyrus_edition = """ ετουσ τεταρτου αυτοκρατοροσ καισαροσ ουεσπασιανου σεβαστου ------------------ ομολογει παυσιριων απολλωνιου του παuσιριωνοσ μητροσ ---------------τωι γεγονοτι αυτωι εκ τησ γενομενησ και μετηλλαχυιασ αυτου γυναικοσ ------------------------- απο τησ αυτησ πολεωσ εν αγυιαι συγχωρειν ειναι ---------------------------------- --------------------σ αυτωι εξ ησ συνεστιν ------------------------------------ ----τησ αυτησ γενεασ την υπαρχουσαν αυτωι οικιαν ------------ ------------------ ---------καὶ αιθριον και αυλη απερ ο υιοσ διοκοροσ -------------------------- --------εγραψεν του δ αυτου διοσκορου ειναι ------------------------------------ ---------- και προ κατενγεγυηται τα δικαια -------------------------------------- νησ κατα τουσ τησ χωρασ νομουσ· εαν δε μη --------------------------------------- υπ αυτου τηι του διοσκορου σημαινομενηι -----------------------------------ενοικισμωι του ημισουσ μερουσ τησ προκειμενησ οικιασ --------------------------------- διοσκοροσ την τουτων αποχην ---------------------------------------------μηδ υπεναντιον τουτοισ επιτελειν μηδε ------------------------------------------------ ανασκευηι κατ αυτησ τιθεσθαι ομολογιαν μηδε ----------------------------------- επιτελεσαι η χωρισ του κυρια ειναι τα διομολογημενα παραβαινειν, εκτεινειν δε τον παραβησομενον τωι υιωι διοσκορωι η τοισ παρ αυτου καθ εκαστην εφοδον το τε βλαβοσ και επιτιμον αργυριου δραχμασ 0 και εισ το δημοσιον τασ ισασ και μηθεν ησσον· δ -----ιων ομολογιαν συνεχωρησεν· """ system_prompt = "Date this papyrus fragment to an exact year!" input_messages = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": papyrus_edition}, ] terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = generation_pipeline( input_messages, max_new_tokens=4, num_beams=45, # Set this as high as your memory will allow! 
num_return_sequences=1, early_stopping=True, ) beam_contents = [] for output in outputs: generated_text = output.get('generated_text', []) for item in generated_text: if item.get('role') == 'assistant': beam_contents.append(item.get('content')) real_response = "71 or 72 AD" print(f"Year: {real_response}") for i, content in enumerate(beam_contents, start=1): print(f"Suggestion {i}: {content}") ``` ### Expected Output: ``` Year: 71 or 72 AD Suggestion 1: 71 ``` ## Usage on free tier in Google Colab If you don’t have access to a larger GPU but want to try the model out, you can run it in a quantized format in Google Colab. **The quality of the responses might deteriorate significantly.** Follow these steps: ### Step 1: Connect to free GPU 1. Click the Connect dropdown near the top right of the notebook. 2. Select Change runtime type. 3. In the modal window, select T4 GPU as your hardware accelerator. 4. Click Save. 5. Click the Connect button to connect to your runtime. After some time, the button will present a green checkmark, along with RAM and disk usage graphs. This indicates that a server has successfully been created with your required hardware. ### Step 2: Install Dependencies ```python !pip install -U bitsandbytes import os os._exit(00) ``` ### Step 3: Download and quantize the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline import torch quant_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForCausalLM.from_pretrained("Ericu950/Papy_1_Llama-3.1-8B-Instruct_date", device_map = "auto", quantization_config = quant_config) tokenizer = AutoTokenizer.from_pretrained("Ericu950/Papy_1_Llama-3.1-8B-Instruct_date") generation_pipeline = pipeline( "text-generation", model=model, tokenizer=tokenizer, device_map="auto", ) ``` ### Step 4: Run inference on a papyrus fragment of your choice ```python # This is a rough transcription of Pap.Ups.
106 papyrus_edition = """ ετουσ τεταρτου αυτοκρατοροσ καισαροσ ουεσπασιανου σεβαστου ------------------ ομολογει παυσιριων απολλωνιου του παuσιριωνοσ μητροσ ---------------τωι γεγονοτι αυτωι εκ τησ γενομενησ και μετηλλαχυιασ αυτου γυναικοσ ------------------------- απο τησ αυτησ πολεωσ εν αγυιαι συγχωρειν ειναι ---------------------------------- --------------------σ αυτωι εξ ησ συνεστιν ------------------------------------ ----τησ αυτησ γενεασ την υπαρχουσαν αυτωι οικιαν ------------ ------------------ ---------καὶ αιθριον και αυλη απερ ο υιοσ διοκοροσ -------------------------- --------εγραψεν του δ αυτου διοσκορου ειναι ------------------------------------ ---------- και προ κατενγεγυηται τα δικαια -------------------------------------- νησ κατα τουσ τησ χωρασ νομουσ· εαν δε μη --------------------------------------- υπ αυτου τηι του διοσκορου σημαινομενηι -----------------------------------ενοικισμωι του ημισουσ μερουσ τησ προκειμενησ οικιασ --------------------------------- διοσκοροσ την τουτων αποχην ---------------------------------------------μηδ υπεναντιον τουτοισ επιτελειν μηδε ------------------------------------------------ ανασκευηι κατ αυτησ τιθεσθαι ομολογιαν μηδε ----------------------------------- επιτελεσαι η χωρισ του κυρια ειναι τα διομολογημενα παραβαινειν, εκτεινειν δε τον παραβησομενον τωι υιωι διοσκορωι η τοισ παρ αυτου καθ εκαστην εφοδον το τε βλαβοσ και επιτιμον αργυριου δραχμασ 0 και εισ το δημοσιον τασ ισασ και μηθεν ησσον· δ -----ιων ομολογιαν συνεχωρησεν·""" system_prompt = "Date this papyrus fragment to an exact year!" input_messages = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": papyrus_edition}, ] outputs = generation_pipeline( input_messages, max_new_tokens=4, num_beams=10, num_return_sequences=1, early_stopping=True, ) beam_contents = [] for output in outputs: generated_text = output.get('generated_text', []) for item in generated_text: if item.get('role') == 'assistant': beam_contents.append(item.get('content')) real_response = "71 or 72 AD" print(f"Year: {real_response}") for i, content in enumerate(beam_contents, start=1): print(f"Suggestion {i}: {content}") ``` ### Expected Output: ``` Year: 71 or 72 AD Suggestion 1: 71 ```
ssmits/Qwen2.5-125B-Instruct
ssmits
2024-09-24T21:15:59Z
6
0
null
[ "safetensors", "qwen2", "chat", "text-generation", "conversational", "en", "base_model:ssmits/Qwen2.5-125B-Instruct", "base_model:finetune:ssmits/Qwen2.5-125B-Instruct", "license:other", "region:us" ]
text-generation
2024-09-24T20:20:38Z
--- license: other license_name: qwen license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE language: - en pipeline_tag: text-generation base_model: - ssmits/Qwen2.5-125B-Instruct tags: - chat --- # Qwen2.5-125B-Instruct Qwen2.5-125B-Instruct is a [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) self-merge made with [MergeKit](https://github.com/arcee-ai/mergekit/tree/main). It was inspired by large merges like: - [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b) - [cognitivecomputations/MegaDolphin-120b](https://huggingface.co/cognitivecomputations/MegaDolphin-120b) - [mlabonne/Meta-Llama-3-120B-Instruct](https://huggingface.co/mlabonne/Meta-Llama-3-120B-Instruct) Special thanks to [Eric Hartford](https://huggingface.co/ehartford) for both inspiring and evaluating the original model, to [Charles Goddard](https://huggingface.co/chargoddard) for creating MergeKit, and to [Maxime Labonne](https://huggingface.co/mlabonne) for creating the Meta-Llama-3-120B-Instruct model that served as the main inspiration for this merge. ## 🔍 Applications This model is recommended for creative writing tasks. It uses the Qwen chat template with a default context window of 8K (can be extended with rope theta). The model is generally quite creative and has a good writing style. It may occasionally output typos and show a preference for uppercase text. ## ⚡ Quantized models To be quantized. * **GGUF**: [Link to GGUF model] * **EXL2**: [Link to EXL2 model] * **mlx**: [Link to mlx model] ## 🏆 Evaluation This model has yet to be thoroughly evaluated. It is expected to excel in creative writing but may have limitations in other tasks. Use it with caution and don't expect it to outperform state-of-the-art models outside of specific creative use cases. Once the model is created and tested, this section will be updated with: * Links to evaluation threads on social media platforms * Examples of the model's performance in creative writing tasks * Comparisons with other large language models in various applications * Community feedback and use cases We encourage users to share their experiences and evaluations to help build a comprehensive understanding of the model's capabilities and limitations. ## 🧩 Configuration ```yaml slices: - sources: - layer_range: [0, 20] model: Qwen/Qwen2.5-72B-Instruct - sources: - layer_range: [10, 30] model: Qwen/Qwen2.5-72B-Instruct - sources: - layer_range: [20, 40] model: Qwen/Qwen2.5-72B-Instruct - sources: - layer_range: [30, 50] model: Qwen/Qwen2.5-72B-Instruct - sources: - layer_range: [40, 60] model: Qwen/Qwen2.5-72B-Instruct - sources: - layer_range: [50, 70] model: Qwen/Qwen2.5-72B-Instruct - sources: - layer_range: [60, 80] model: Qwen/Qwen2.5-72B-Instruct merge_method: passthrough dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "ssmits/Qwen2.5-125B-Instruct" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
roequitz/bart-abs-2409-1947-lr-0.0003-bs-4-maxep-6
roequitz
2024-09-24T21:09:59Z
91
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:sshleifer/distilbart-xsum-12-6", "base_model:finetune:sshleifer/distilbart-xsum-12-6", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-09-24T21:09:17Z
--- library_name: transformers license: apache-2.0 base_model: sshleifer/distilbart-xsum-12-6 tags: - generated_from_trainer model-index: - name: bart-abs-2409-1947-lr-0.0003-bs-4-maxep-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-abs-2409-1947-lr-0.0003-bs-4-maxep-6 This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.3423 - Rouge/rouge1: 0.2439 - Rouge/rouge2: 0.0504 - Rouge/rougel: 0.2065 - Rouge/rougelsum: 0.2067 - Bertscore/bertscore-precision: 0.8544 - Bertscore/bertscore-recall: 0.8581 - Bertscore/bertscore-f1: 0.8562 - Meteor: 0.229 - Gen Len: 46.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge/rouge1 | Rouge/rouge2 | Rouge/rougel | Rouge/rougelsum | Bertscore/bertscore-precision | Bertscore/bertscore-recall | Bertscore/bertscore-f1 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:------:|:-------:| | 2.1377 | 1.0 | 217 | 3.9220 | 0.3283 | 0.1009 | 0.2612 | 0.2612 | 0.8785 | 0.8638 | 0.871 | 0.2695 | 33.0 | | 3.3219 | 2.0 | 434 | 3.7523 | 0.2756 | 0.0805 | 0.2368 | 0.237 | 0.8845 | 0.8545 | 0.8692 | 0.2111 | 25.0 | | 2.115 | 3.0 | 651 | 4.0783 | 0.282 | 0.0747 | 0.2116 | 0.2118 | 0.8663 | 0.8623 | 0.8642 | 0.2582 | 41.0 | | 1.1461 | 4.0 | 868 | 4.8795 | 0.251 | 0.0501 | 0.21 | 0.2102 | 0.8497 | 0.8506 | 0.8501 | 0.2025 | 37.0 | | 0.6272 | 5.0 | 1085 | 5.8094 | 0.2811 | 0.0751 | 0.229 | 0.2293 | 0.8628 | 0.8693 | 0.866 | 0.2058 | 44.0 | | 0.3841 | 6.0 | 1302 | 6.3423 | 0.2439 | 0.0504 | 0.2065 | 0.2067 | 0.8544 | 0.8581 | 0.8562 | 0.229 | 46.0 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
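A minimal usage sketch, assuming the standard `transformers` summarization pipeline; the sample text and length settings are illustrative assumptions, not taken from the training run:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="roequitz/bart-abs-2409-1947-lr-0.0003-bs-4-maxep-6")

article = "Your long input document goes here ..."  # placeholder text
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```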
Jios/TON_IoT_no_injection
Jios
2024-09-24T20:55:53Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/roberta-large", "base_model:finetune:FacebookAI/roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-24T17:44:06Z
--- library_name: transformers license: mit base_model: roberta-large tags: - generated_from_trainer model-index: - name: TON_IoT_no_injection results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TON_IoT_no_injection This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0028 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2049 | 1.0 | 750 | 0.0222 | | 0.0343 | 2.0 | 1500 | 0.0188 | | 0.0194 | 3.0 | 2250 | 0.0117 | | 0.0141 | 4.0 | 3000 | 0.0059 | | 0.006 | 5.0 | 3750 | 0.0028 | ### Framework versions - Transformers 4.45.0.dev0 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
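A minimal usage sketch, assuming the standard `transformers` text-classification pipeline; the sample input is an illustrative placeholder, since the card does not document the expected record format:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="Jios/TON_IoT_no_injection")

# Placeholder input; real inputs should match the (undocumented) training format
print(classifier("example network telemetry record"))
```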
AfnanTS/ARBERTv2_ArLAMA
AfnanTS
2024-09-24T20:44:06Z
36
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "ar", "dataset:AfnanTS/Final_ArLAMA_DS_tokenized_for_ARBERTv2", "base_model:UBC-NLP/ARBERTv2", "base_model:finetune:UBC-NLP/ARBERTv2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-05-25T15:30:42Z
--- datasets: - AfnanTS/Final_ArLAMA_DS_tokenized_for_ARBERTv2 language: - ar base_model: - UBC-NLP/ARBERTv2 pipeline_tag: fill-mask --- <img src="./arab_icon2.png" alt="Model Logo" width="30%" height="30%" align="right"/> **ARBERTv2_ArLAMA** is a transformer-based Arabic language model fine-tuned on Masked Language Modeling (MLM) tasks. The model uses Knowledge Graphs (KGs) to enhance its understanding of semantic relations and improve its performance in various Arabic NLP tasks. ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use Filling masked tokens in Arabic text, particularly in contexts enriched with knowledge from KGs. ### Downstream Use Can be further fine-tuned for Arabic NLP tasks that require semantic understanding, such as text classification or question answering. ## How to Get Started with the Model ```python from transformers import pipeline fill_mask = pipeline("fill-mask", model="AfnanTS/ARBERTv2_ArLAMA") print(fill_mask("اللغة [MASK] مهمة جدا.")) ``` ## Training Details ### Training Data Trained on the ArLAMA dataset, which is designed to represent Knowledge Graphs in natural language. ### Training Procedure Continued pre-training of ArBERTv2 using Masked Language Modeling (MLM) tasks, integrating structured knowledge from Knowledge Graphs.
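For the downstream use mentioned above, a minimal sketch of loading the checkpoint for sequence classification; `num_labels=2` is a hypothetical binary task, not something the card specifies:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AfnanTS/ARBERTv2_ArLAMA")
# The MLM head is replaced with a freshly initialized classification head
model = AutoModelForSequenceClassification.from_pretrained(
    "AfnanTS/ARBERTv2_ArLAMA", num_labels=2  # hypothetical binary task
)
```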
Aoishi19/autotrain-supermarket2
Aoishi19
2024-09-24T20:31:40Z
5
0
null
[ "tensorboard", "safetensors", "vit", "autotrain", "image-classification", "base_model:Falconsai/nsfw_image_detection", "base_model:finetune:Falconsai/nsfw_image_detection", "region:us" ]
image-classification
2024-09-24T17:35:08Z
--- tags: - autotrain - image-classification base_model: Falconsai/nsfw_image_detection widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # Model Trained Using AutoTrain - Problem type: Image Classification ## Validation Metrics loss: 0.5076770186424255 f1_macro: 0.9034486384853411 f1_micro: 0.9506314580941446 f1_weighted: 0.9453900122810057 precision_macro: 0.9375712446090816 precision_micro: 0.9506314580941446 precision_weighted: 0.9538751552377964 recall_macro: 0.89997224997225 recall_micro: 0.9506314580941446 recall_weighted: 0.9506314580941446 accuracy: 0.9506314580941446
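A minimal usage sketch, assuming the standard `transformers` image-classification pipeline; the image URL is borrowed from the widget examples above:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Aoishi19/autotrain-supermarket2")
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```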
Judah04/SpeechT5-Hausa-5
Judah04
2024-09-24T20:06:28Z
8
0
transformers
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:microsoft/speecht5_tts", "base_model:finetune:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
text-to-audio
2024-09-24T16:51:39Z
--- library_name: transformers license: mit base_model: microsoft/speecht5_tts tags: - generated_from_trainer datasets: - common_voice_17_0 model-index: - name: SpeechT5-Hausa-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SpeechT5-Hausa-5 This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - training_steps: 2000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | 0.6245 | 1.8476 | 100 | 0.5508 | | 0.5758 | 3.6952 | 200 | 0.5566 | | 0.5463 | 5.5427 | 300 | 0.5015 | | 0.5299 | 7.3903 | 400 | 0.4968 | | 0.5139 | 9.2379 | 500 | 0.4792 | | 0.5132 | 11.0855 | 600 | 0.4823 | | 0.4982 | 12.9330 | 700 | 0.4640 | | 0.4889 | 14.7806 | 800 | 0.4649 | | 0.4841 | 16.6282 | 900 | 0.4601 | | 0.4795 | 18.4758 | 1000 | 0.4631 | | 0.4779 | 20.3233 | 1100 | 0.4592 | | 0.4642 | 22.1709 | 1200 | 0.4651 | | 0.4618 | 24.0185 | 1300 | 0.4599 | | 0.4583 | 25.8661 | 1400 | 0.4634 | | 0.4584 | 27.7136 | 1500 | 0.4592 | | 0.4539 | 29.5612 | 1600 | 0.4604 | | 0.4498 | 31.4088 | 1700 | 0.4642 | | 0.4428 | 33.2564 | 1800 | 0.4677 | | 0.4517 | 35.1039 | 1900 | 0.4705 | | 0.4371 | 36.9515 | 2000 | 0.4702 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
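A minimal usage sketch following the standard SpeechT5 recipe; the Hausa sample sentence and the `Matthijs/cmu-arctic-xvectors` speaker embedding are illustrative assumptions, since the card does not ship its own speaker embeddings:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("Judah04/SpeechT5-Hausa-5")
model = SpeechT5ForTextToSpeech.from_pretrained("Judah04/SpeechT5-Hausa-5")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Sannu, yaya kake?", return_tensors="pt")  # placeholder Hausa text

# Borrow a speaker embedding from a public x-vector dataset (an assumption)
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```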
mniazm/distilbert-base-uncased-finetuned-emotion
mniazm
2024-09-24T20:06:26Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-09-24T15:10:10Z
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2139 - Accuracy: 0.929 - F1: 0.9290 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8053 | 1.0 | 250 | 0.3047 | 0.9095 | 0.9086 | | 0.2429 | 2.0 | 500 | 0.2139 | 0.929 | 0.9290 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
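A minimal usage sketch, assuming the standard `transformers` pipeline; `top_k=None` returns scores for every emotion class, and the sample sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mniazm/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all classes instead of only the top one
)
print(classifier("I can't wait to see you again!"))
```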
eljanmahammadli/micro-llama-300M-v1
eljanmahammadli
2024-09-24T20:01:57Z
236
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T20:01:11Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
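Since the auto-generated card leaves the quick-start section empty, here is a minimal sketch assuming the standard `transformers` text-generation pipeline; the prompt and token budget are illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="eljanmahammadli/micro-llama-300M-v1")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```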
mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF
mradermacher
2024-09-24T19:54:55Z
86
0
transformers
[ "transformers", "gguf", "llama-factory", "en", "base_model:Moriacrafter/Qwen1.5-0.5B-none_DepressionDetection_v2", "base_model:quantized:Moriacrafter/Qwen1.5-0.5B-none_DepressionDetection_v2", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T19:41:28Z
--- base_model: Moriacrafter/Qwen1.5-0.5B-none_DepressionDetection_v2 language: - en library_name: transformers quantized_by: mradermacher tags: - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/Moriacrafter/Qwen1.5-0.5B-none_DepressionDetection_v2 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q2_K.gguf) | Q2_K | 0.3 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.IQ3_XS.gguf) | IQ3_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.IQ3_M.gguf) | IQ3_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q3_K_L.gguf) | Q3_K_L | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.IQ4_XS.gguf) | IQ4_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q5_K_S.gguf) | Q5_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q6_K.gguf) | Q6_K | 0.5 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen1.5-0.5B-none_DepressionDetection_v2-GGUF/resolve/main/Qwen1.5-0.5B-none_DepressionDetection_v2.f16.gguf) | f16 | 1.0 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
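Beyond the GGUF readers linked above, a minimal sketch using `llama-cpp-python` (one of several GGUF-capable runtimes; the file name assumes you downloaded the Q4_K_M quant from the table):

```python
from llama_cpp import Llama

# Point at a locally downloaded quant from the table above
llm = Llama(model_path="Qwen1.5-0.5B-none_DepressionDetection_v2.Q4_K_M.gguf")

out = llm("Hello, how are you today?", max_tokens=64)
print(out["choices"][0]["text"])
```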
Kvisten/nocco-apple-flux-v7
Kvisten
2024-09-24T19:47:08Z
8
1
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "ai-toolkit", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-24T19:46:54Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - ai-toolkit widget: - text: a green aluminum can of 'NOCCO BCAA+ Apple' energy drink on a rock in a foggy forest output: url: samples/1727207060662__000002500_0.jpg - text: a green aluminum can of 'NOCCO BCAA+ Apple' energy drink can in a hand of a person output: url: samples/1727207097408__000002500_1.jpg - text: Green Nocco Apple Can on a sandy beach at sunset, surrounded by seashells and gentle ocean waves output: url: samples/1727207134150__000002500_2.jpg - text: Taylor swift holding a NOCCO BCAA+ Apple energy drink can output: url: samples/1727207170894__000002500_3.jpg - text: Green Nocco Apple Can, green aluminum can dropping into crystal-clear water, apple taste, commercial-style, the water should create dramatic splashes and bubbles, surrounding the can in all directions, capturing the moment of impact, high-resolution, colorful, (from above:1.2), photo by Gregory Colbert output: url: samples/1727207207631__000002500_4.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: Green Nocco Apple Can license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # nocco_apple_v6 Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) <Gallery /> ## Trigger words You should use `Green Nocco Apple Can` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc. Weights for this model are available in Safetensors format. [Download](/Kvisten/nocco-apple-flux-v7/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda') pipeline.load_lora_weights('Kvisten/nocco-apple-flux-v7', weight_name='nocco_apple_v6.safetensors') image = pipeline("a green aluminum can of 'NOCCO BCAA+ Apple' energy drink on a rock in a foggy forest").images[0] image.save("my_image.png") ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
awni00/T-sa32-1.3B
awni00
2024-09-24T19:46:41Z
6
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2024-09-24T19:44:34Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
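A hypothetical sketch of the `PyTorchModelHubMixin` pattern; the class name and layer are placeholders, since the card does not document the actual architecture (the matching class definition from the training code is required to load the weights):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):  # placeholder class
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)  # placeholder layer

# With the matching class definition, weights load directly from the Hub:
# model = MyModel.from_pretrained("awni00/T-sa32-1.3B")
```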
cointegrated/bert-char-ctc-en-ru-translit-v0
cointegrated
2024-09-24T19:43:50Z
139
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-09-24T19:19:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID This is a model for trainable transliteration from the Latin script (mostly, but not only, English) into Russian Cyrillic. **How to use:** ```python import torch from transformers import BertForMaskedLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("cointegrated/bert-char-ctc-en-ru-translit-v0", trust_remote_code=True) model = BertForMaskedLM.from_pretrained("cointegrated/bert-char-ctc-en-ru-translit-v0") text = 'Hello world! My name is David Dale, and yours is Schwarzenegger?' with torch.inference_mode(): batch = tokenizer(text, return_tensors='pt', spaces=1, padding=True).to(model.device) logits = torch.log_softmax(model(**batch).logits, axis=-1) print(tokenizer.decode(logits[0].argmax(-1), skip_special_tokens=True)) # хэло Уорлд май нэйм из дэвид дэйл энд ёрз из скУорзэнэгжэр ``` The argument `spaces` can range from 0 to 4; it affects the results, and the recommended value is 2. **Why use:** - Just for fun - To augment your training data, if for some reason you want to make it robust to script changes ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure.
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
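Building on the usage snippet earlier in this card (it reuses the `model` and `tokenizer` loaded there), a hypothetical batched wrapper; the function name is illustrative, and `spaces=2` follows the recommendation above:

```python
def transliterate(texts, spaces=2):
    # Batched version of the single-text example above
    with torch.inference_mode():
        batch = tokenizer(texts, return_tensors='pt', spaces=spaces, padding=True).to(model.device)
        logits = model(**batch).logits
    return [tokenizer.decode(ids, skip_special_tokens=True) for ids in logits.argmax(-1)]

print(transliterate(['Hello world', 'Schwarzenegger']))
```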
awni00/T-sa24-757M
awni00
2024-09-24T19:41:03Z
6
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
2024-09-24T19:39:56Z
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
bartekupartek/llm2vec-qwen2.5-0.5-instruct
bartekupartek
2024-09-24T19:29:46Z
5
0
null
[ "safetensors", "qwen2", "generated_from_trainer", "dataset:wikitext", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "model-index", "region:us" ]
null
2024-09-24T19:10:41Z
--- license: apache-2.0 base_model: Qwen/Qwen2.5-0.5B-Instruct tags: - generated_from_trainer datasets: - wikitext metrics: - accuracy model-index: - name: llm2vec-qwen2.5-0.5-instruct results: - task: name: Masked Language Modeling type: fill-mask dataset: name: wikitext wikitext-103-raw-v1 type: wikitext args: wikitext-103-raw-v1 metrics: - name: Accuracy type: accuracy value: 0.629556877924779 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llm2vec-qwen2.5-0.5-instruct This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the wikitext wikitext-103-raw-v1 dataset. It achieves the following results on the evaluation set: - Loss: 1.8264 - Accuracy: 0.6296 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.0083 | 100 | 2.3376 | 0.5511 | | No log | 0.0166 | 200 | 2.1736 | 0.5765 | | No log | 0.0248 | 300 | 2.0679 | 0.5930 | | No log | 0.0331 | 400 | 1.9839 | 0.6056 | | 2.2761 | 0.0414 | 500 | 1.9611 | 0.6085 | | 2.2761 | 0.0497 | 600 | 1.9054 | 0.6203 | | 2.2761 | 0.0580 | 700 | 1.8838 | 0.6242 | | 2.2761 | 0.0662 | 800 | 1.8403 | 0.6296 | | 2.2761 | 0.0745 | 900 | 1.8235 | 0.6300 | | 1.8887 | 0.0828 | 1000 | 1.7920 | 0.6351 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.4.1+cu121 - Datasets 3.0.0 - Tokenizers 0.19.1
Solshine/Qwen2.5-142B-Doubled72B-Math-Instruct-Mergekit-Merge
Solshine
2024-09-24T19:29:17Z
5
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:Qwen/Qwen2.5-72B-Instruct", "base_model:merge:Qwen/Qwen2.5-72B-Instruct", "base_model:Qwen/Qwen2.5-Math-72B", "base_model:merge:Qwen/Qwen2.5-Math-72B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T00:52:59Z
--- base_model: - Qwen/Qwen2.5-Math-72B - Qwen/Qwen2.5-72B-Instruct library_name: transformers tags: - mergekit - merge license: other --- ## Qwen2.5-142B-Doubled72B-Math-Instruct (Mergekit-Merge) by Solshine (Caleb DeLeeuw) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654527ce2a13610acc25d921/FSWNkg4h9W329CiBYIuiC.png) # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). # License Hippocratic License 3.0 + Ecocide module, + Extractive Industries module, + Copyleft [![Hippocratic License HL3-CL-ECO-EXTR](https://img.shields.io/static/v1?label=Hippocratic%20License&message=HL3-CL-ECO-EXTR&labelColor=5e2751&color=bc8c3d)](https://firstdonoharm.dev/version/3/0/cl-eco-extr.html) https://firstdonoharm.dev/version/3/0/cl-eco-extr.txt ## Merge Details ### Merge Method This model was merged using the passthrough merge method. Every layer is doubled in order, interleaving Qwen/Qwen2.5-Math-72B and Qwen/Qwen2.5-72B-Instruct so that the two models alternate in contributing each layer; the MLP layers plus the two output layers are taken only from the instruct model, yielding 142B parameters. No additional fine-tuning has been done on this merged model. ### Models Merged The following models were included in the merge: * [Qwen/Qwen2.5-Math-72B](https://huggingface.co/Qwen/Qwen2.5-Math-72B) * [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [0, 1] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [0, 1] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [1, 2] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [1, 2] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [2, 3] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [2, 3] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [3, 4] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [3, 4] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [4, 5] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [4, 5] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [5, 6] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [5, 6] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [6, 7] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [6, 7] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [7, 8] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [7, 8] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [8, 9] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [8, 9] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [9, 10] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [9, 10] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [10, 11] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [10, 11] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [11, 12] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [11, 12] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [12, 13] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [12, 13] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [13, 14] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [13, 14] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [14, 15] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [14, 15] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [15, 16] - sources: - model:
Qwen/Qwen2.5-72B-Instruct layer_range: [15, 16] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [16, 17] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [16, 17] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [17, 18] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [17, 18] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [18, 19] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [18, 19] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [19, 20] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [19, 20] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [20, 21] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [20, 21] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [21, 22] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [21, 22] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [22, 23] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [22, 23] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [23, 24] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [23, 24] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [24, 25] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [24, 25] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [25, 26] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [25, 26] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [26, 27] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [26, 27] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [27, 28] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [27, 28] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [28, 29] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [28, 29] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [29, 30] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [29, 30] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [30, 31] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [30, 31] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [31, 32] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [31, 32] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [32, 33] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [32, 33] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [33, 34] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [33, 34] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [34, 35] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [34, 35] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [35, 36] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [35, 36] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [36, 37] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [36, 37] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [37, 38] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [37, 38] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [38, 39] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [38, 39] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [39, 40] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [39, 40] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [40, 41] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [40, 41] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [41, 42] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [41, 42] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [42, 43] - sources: 
- model: Qwen/Qwen2.5-72B-Instruct layer_range: [42, 43] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [43, 44] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [43, 44] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [44, 45] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [44, 45] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [45, 46] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [45, 46] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [46, 47] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [46, 47] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [47, 48] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [47, 48] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [48, 49] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [48, 49] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [49, 50] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [49, 50] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [50, 51] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [50, 51] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [51, 52] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [51, 52] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [52, 53] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [52, 53] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [53, 54] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [53, 54] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [54, 55] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [54, 55] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [55, 56] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [55, 56] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [56, 57] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [56, 57] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [57, 58] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [57, 58] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [58, 59] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [58, 59] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [59, 60] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [59, 60] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [60, 61] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [60, 61] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [61, 62] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [61, 62] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [62, 63] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [62, 63] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [63, 64] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [63, 64] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [64, 65] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [64, 65] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [65, 66] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [65, 66] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [66, 67] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [66, 67] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [67, 68] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [67, 68] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [68, 69] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [68, 69] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [69, 70] - 
sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [69, 70] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [70, 71] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [70, 71] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [71, 72] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [71, 72] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [72, 73] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [72, 73] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [73, 74] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [73, 74] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [74, 75] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [74, 75] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [75, 76] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [75, 76] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [76, 77] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [76, 77] - sources: - model: Qwen/Qwen2.5-Math-72B layer_range: [77, 78] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [77, 78] - sources: - model: Qwen/Qwen2.5-72B-Instruct layer_range: [77, 80] merge_method: passthrough dtype: float16 ```
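The interleave config above alternates single decoder layers from Qwen/Qwen2.5-Math-72B and Qwen/Qwen2.5-72B-Instruct via the passthrough method. A minimal sketch of materializing such a config with mergekit's CLI, assuming mergekit is installed and the YAML has been saved to disk (both paths are illustrative assumptions, not from the card):

```python
# Sketch: invoke mergekit's CLI on a saved passthrough config.
# Both paths are hypothetical placeholders.
import subprocess

CONFIG_PATH = "qwen2.5-interleave.yaml"    # the YAML shown above, saved locally
OUTPUT_DIR = "./merged-qwen2.5-interleave"

# mergekit-yaml is mergekit's entry point; --copy-tokenizer carries the
# base tokenizer into the output directory alongside the merged weights.
subprocess.run(
    ["mergekit-yaml", CONFIG_PATH, OUTPUT_DIR, "--copy-tokenizer"],
    check=True,
)
```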
glaiveai/Reflection-Llama-3.1-70B
glaiveai
2024-09-24T19:25:48Z
16
9
null
[ "safetensors", "llama", "region:us" ]
null
2024-09-19T11:09:47Z
This repo is for letting people reproduce Reflection-70B benchmark scores. [https://glaive.ai/blog/post/reflection-postmortem](https://glaive.ai/blog/post/reflection-postmortem)
kiranpantha/w2v-bert-2.0-nepali-unlabeled-1
kiranpantha
2024-09-24T19:21:33Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "ne", "dataset:kiranpantha/OpenSLR54-Balanced-Nepali", "base_model:kiranpantha/w2v-bert-2.0-nepali", "base_model:finetune:kiranpantha/w2v-bert-2.0-nepali", "license:mit", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-09-24T17:01:51Z
--- library_name: transformers language: - ne license: mit base_model: kiranpantha/w2v-bert-2.0-nepali tags: - generated_from_trainer datasets: - kiranpantha/OpenSLR54-Balanced-Nepali metrics: - wer model-index: - name: Wave2Vec2-Bert2.0 - Kiran Pantha results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: OpenSLR54 type: kiranpantha/OpenSLR54-Balanced-Nepali config: default split: test args: 'config: ne, split: train,test' metrics: - name: Wer type: wer value: 0.4301989457575242 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wave2Vec2-Bert2.0 - Kiran Pantha This model is a fine-tuned version of [kiranpantha/w2v-bert-2.0-nepali](https://huggingface.co/kiranpantha/w2v-bert-2.0-nepali) on the OpenSLR54 dataset. It achieves the following results on the evaluation set: - Loss: 0.4052 - Wer: 0.4302 - Cer: 0.1029 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | 0.7515 | 0.15 | 300 | 0.4814 | 0.4911 | 0.1183 | | 0.6554 | 0.3 | 600 | 0.5699 | 0.5382 | 0.1385 | | 0.6723 | 0.45 | 900 | 0.5463 | 0.5401 | 0.1395 | | 0.6635 | 0.6 | 1200 | 0.5244 | 0.5043 | 0.1250 | | 0.6132 | 0.75 | 1500 | 0.4725 | 0.4831 | 0.1184 | | 0.5786 | 0.9 | 1800 | 0.4620 | 0.4702 | 0.1147 | | 0.5639 | 1.05 | 2100 | 0.4810 | 0.4668 | 0.1140 | | 0.4863 | 1.2 | 2400 | 0.4639 | 0.4766 | 0.1151 | | 0.4784 | 1.35 | 2700 | 0.4527 | 0.4611 | 0.1108 | | 0.456 | 1.5 | 3000 | 0.4229 | 0.4458 | 0.1089 | | 0.4613 | 1.65 | 3300 | 0.4460 | 0.4478 | 0.1095 | | 0.4506 | 1.8 | 3600 | 0.4166 | 0.4413 | 0.1047 | | 0.4369 | 1.95 | 3900 | 0.4052 | 0.4302 | 0.1029 | ### Framework versions - Transformers 4.45.0.dev0 - Pytorch 2.4.1+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
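For quick orientation, a minimal inference sketch with the transformers ASR pipeline; the audio filename is a placeholder, and the pipeline handles decoding and resampling of common audio formats:

```python
# Sketch: transcribe Nepali audio with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kiranpantha/w2v-bert-2.0-nepali-unlabeled-1",
)

# "sample.wav" is an illustrative placeholder file.
print(asr("sample.wav")["text"])
```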
RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf
RichardErkhov
2024-09-24T19:20:07Z
78
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T12:11:32Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) L3-Dark-Planet-8B - GGUF - Model creator: https://huggingface.co/DavidAU/ - Original model: https://huggingface.co/DavidAU/L3-Dark-Planet-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [L3-Dark-Planet-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [L3-Dark-Planet-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [L3-Dark-Planet-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [L3-Dark-Planet-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [L3-Dark-Planet-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [L3-Dark-Planet-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [L3-Dark-Planet-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [L3-Dark-Planet-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [L3-Dark-Planet-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [L3-Dark-Planet-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [L3-Dark-Planet-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [L3-Dark-Planet-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [L3-Dark-Planet-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [L3-Dark-Planet-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [L3-Dark-Planet-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [L3-Dark-Planet-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [L3-Dark-Planet-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [L3-Dark-Planet-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q5_K.gguf) | Q5_K | 5.34GB | | [L3-Dark-Planet-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | 
[L3-Dark-Planet-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [L3-Dark-Planet-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q6_K.gguf) | Q6_K | 6.14GB | | [L3-Dark-Planet-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/DavidAU_-_L3-Dark-Planet-8B-gguf/blob/main/L3-Dark-Planet-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: - mergekit - merge base_model: [] model-index: - name: L3-Dark-Planet-8B results: - task: type: text-generation name: Text Generation dataset: name: IFEval (0-Shot) type: HuggingFaceH4/ifeval args: num_few_shot: 0 metrics: - type: inst_level_strict_acc and prompt_level_strict_acc value: 41.34 name: strict accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DavidAU/L3-Dark-Planet-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BBH (3-Shot) type: BBH args: num_few_shot: 3 metrics: - type: acc_norm value: 29.79 name: normalized accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DavidAU/L3-Dark-Planet-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MATH Lvl 5 (4-Shot) type: hendrycks/competition_math args: num_few_shot: 4 metrics: - type: exact_match value: 7.48 name: exact match source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DavidAU/L3-Dark-Planet-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GPQA (0-shot) type: Idavidrein/gpqa args: num_few_shot: 0 metrics: - type: acc_norm value: 6.71 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DavidAU/L3-Dark-Planet-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MuSR (0-shot) type: TAUR-Lab/MuSR args: num_few_shot: 0 metrics: - type: acc_norm value: 6.33 name: acc_norm source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DavidAU/L3-Dark-Planet-8B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU-PRO (5-shot) type: TIGER-Lab/MMLU-Pro config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 30.41 name: accuracy source: url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DavidAU/L3-Dark-Planet-8B name: Open LLM Leaderboard --- <h2>L3-Dark-Planet-8B</h2> This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly. For full information about this model, including: - Details about this model and its use case(s). - Context limits - Special usage notes / settings. - Any model(s) used to create this model. - Template(s) used to access/use this model. 
- Example generation(s) - GGUF quants of this model Please go to: [ https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF ] For "Dark Planet Horror City 8b": [ https://huggingface.co/DavidAU/L3-Dark-Planet-Horror-City-8B-NEO-Imatrix-GGUF ] (To create GGUFs for Dark Planet Horror City 8B, use the ".dat" file for Imatrix in this repo.) # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DavidAU__L3-Dark-Planet-8B) | Metric |Value| |-------------------|----:| |Avg. |20.34| |IFEval (0-Shot) |41.34| |BBH (3-Shot) |29.79| |MATH Lvl 5 (4-Shot)| 7.48| |GPQA (0-shot) | 6.71| |MuSR (0-shot) | 6.33| |MMLU-PRO (5-shot) |30.41|
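A minimal local-inference sketch for these quants with llama-cpp-python; the quant file, context size, and prompt are illustrative choices, not from the card:

```python
# Sketch: load one of the GGUF quants above with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Dark-Planet-8B.Q4_K_M.gguf",  # mid-size quant from the table
    n_ctx=4096,
)

out = llm("Write the opening line of a dark sci-fi story.", max_tokens=64)
print(out["choices"][0]["text"])
```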
XiangZ/hit-srf-2x
XiangZ
2024-09-24T19:02:47Z
30
0
null
[ "safetensors", "HiT-SR", "image super-resolution", "transformer", "efficient transformer", "image-to-image", "arxiv:2407.05878", "region:us" ]
image-to-image
2024-09-15T09:24:23Z
--- pipeline_tag: image-to-image tags: - HiT-SR - image super-resolution - transformer - efficient transformer --- <h1> HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution </h1> <h3><a href="https://github.com/XiangZ-0/HiT-SR">[Github]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EVsrOr1-PFFMsXxiRHEmKeoBSH6DPkTuN2GRmEYsl9bvDQ?e=f9wGUO">[Paper]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EYmRy-QOjPdFsMRT_ElKQqABYzoIIfDtkt9hofZ5YY_GjQ?e=2Iapqf">[Supp]</a> | <a href="https://www.youtube.com/watch?v=9rO0pjmmjZg">[Video]</a> | <a href="https://1drv.ms/f/c/de821e161e64ce08/EuE6xW-sN-hFgkIa6J-Y8gkB9b4vDQZQ01r1ZP1lmzM0vQ?e=aIRfCQ">[Visual Results]</a> </h3> <div></div> HiT-SR is a general strategy to improve transformer-based SR methods. We apply our HiT-SR approach to improve [SwinIR-Light](https://github.com/JingyunLiang/SwinIR), [SwinIR-NG](https://github.com/rami0205/NGramSwin) and [SRFormer-Light](https://github.com/HVision-NKU/SRFormer), corresponding to our HiT-SIR, HiT-SNG, and HiT-SRF. Compared with the original structure, our improved models achieve better SR performance while reducing computational burdens. Paper: https://huggingface.co/papers/2407.05878 🤗 Please refer to https://huggingface.co/XiangZ/hit-sr for usage.
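The card defers usage details to the linked hit-sr page; as a neutral starting point, the checkpoint files can be fetched locally with huggingface_hub (only the repo id comes from this entry):

```python
# Sketch: download the HiT-SRF 2x checkpoint files; inference itself should
# follow the upstream HiT-SR instructions linked in the card.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="XiangZ/hit-srf-2x")
print("Checkpoint files in:", local_dir)
```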
XiangZ/hit-sng-4x
XiangZ
2024-09-24T19:02:05Z
7
0
null
[ "safetensors", "HiT-SR", "image super-resolution", "transformer", "efficient transformer", "image-to-image", "arxiv:2407.05878", "region:us" ]
image-to-image
2024-09-15T09:22:04Z
--- pipeline_tag: image-to-image tags: - HiT-SR - image super-resolution - transformer - efficient transformer --- <h1> HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution </h1> <h3><a href="https://github.com/XiangZ-0/HiT-SR">[Github]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EVsrOr1-PFFMsXxiRHEmKeoBSH6DPkTuN2GRmEYsl9bvDQ?e=f9wGUO">[Paper]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EYmRy-QOjPdFsMRT_ElKQqABYzoIIfDtkt9hofZ5YY_GjQ?e=2Iapqf">[Supp]</a> | <a href="https://www.youtube.com/watch?v=9rO0pjmmjZg">[Video]</a> | <a href="https://1drv.ms/f/c/de821e161e64ce08/EuE6xW-sN-hFgkIa6J-Y8gkB9b4vDQZQ01r1ZP1lmzM0vQ?e=aIRfCQ">[Visual Results]</a> </h3> <div></div> HiT-SR is a general strategy to improve transformer-based SR methods. We apply our HiT-SR approach to improve [SwinIR-Light](https://github.com/JingyunLiang/SwinIR), [SwinIR-NG](https://github.com/rami0205/NGramSwin) and [SRFormer-Light](https://github.com/HVision-NKU/SRFormer), corresponding to our HiT-SIR, HiT-SNG, and HiT-SRF. Compared with the original structure, our improved models achieve better SR performance while reducing computational burdens. Paper: https://huggingface.co/papers/2407.05878 🤗 Please refer to https://huggingface.co/XiangZ/hit-sr for usage.
XiangZ/hit-sng-3x
XiangZ
2024-09-24T19:01:30Z
6
0
null
[ "safetensors", "HiT-SR", "image super-resolution", "transformer", "efficient transformer", "image-to-image", "arxiv:2407.05878", "region:us" ]
image-to-image
2024-09-15T09:21:32Z
--- pipeline_tag: image-to-image tags: - HiT-SR - image super-resolution - transformer - efficient transformer --- <h1> HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution </h1> <h3><a href="https://github.com/XiangZ-0/HiT-SR">[Github]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EVsrOr1-PFFMsXxiRHEmKeoBSH6DPkTuN2GRmEYsl9bvDQ?e=f9wGUO">[Paper]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EYmRy-QOjPdFsMRT_ElKQqABYzoIIfDtkt9hofZ5YY_GjQ?e=2Iapqf">[Supp]</a> | <a href="https://www.youtube.com/watch?v=9rO0pjmmjZg">[Video]</a> | <a href="https://1drv.ms/f/c/de821e161e64ce08/EuE6xW-sN-hFgkIa6J-Y8gkB9b4vDQZQ01r1ZP1lmzM0vQ?e=aIRfCQ">[Visual Results]</a> </h3> <div></div> HiT-SR is a general strategy to improve transformer-based SR methods. We apply our HiT-SR approach to improve [SwinIR-Light](https://github.com/JingyunLiang/SwinIR), [SwinIR-NG](https://github.com/rami0205/NGramSwin) and [SRFormer-Light](https://github.com/HVision-NKU/SRFormer), corresponding to our HiT-SIR, HiT-SNG, and HiT-SRF. Compared with the original structure, our improved models achieve better SR performance while reducing computational burdens. Paper: https://huggingface.co/papers/2407.05878 🤗 Please refer to https://huggingface.co/XiangZ/hit-sr for usage.
XiangZ/hit-sir-3x
XiangZ
2024-09-24T19:00:35Z
8
0
null
[ "safetensors", "HiT-SR", "image super-resolution", "transformer", "efficient transformer", "image-to-image", "arxiv:2407.05878", "region:us" ]
image-to-image
2024-09-15T09:13:00Z
--- pipeline_tag: image-to-image tags: - HiT-SR - image super-resolution - transformer - efficient transformer --- <h1> HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution </h1> <h3><a href="https://github.com/XiangZ-0/HiT-SR">[Github]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EVsrOr1-PFFMsXxiRHEmKeoBSH6DPkTuN2GRmEYsl9bvDQ?e=f9wGUO">[Paper]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EYmRy-QOjPdFsMRT_ElKQqABYzoIIfDtkt9hofZ5YY_GjQ?e=2Iapqf">[Supp]</a> | <a href="https://www.youtube.com/watch?v=9rO0pjmmjZg">[Video]</a> | <a href="https://1drv.ms/f/c/de821e161e64ce08/EuE6xW-sN-hFgkIa6J-Y8gkB9b4vDQZQ01r1ZP1lmzM0vQ?e=aIRfCQ">[Visual Results]</a> </h3> <div></div> HiT-SR is a general strategy to improve transformer-based SR methods. We apply our HiT-SR approach to improve [SwinIR-Light](https://github.com/JingyunLiang/SwinIR), [SwinIR-NG](https://github.com/rami0205/NGramSwin) and [SRFormer-Light](https://github.com/HVision-NKU/SRFormer), corresponding to our HiT-SIR, HiT-SNG, and HiT-SRF. Compared with the original structure, our improved models achieve better SR performance while reducing computational burdens. Paper: https://huggingface.co/papers/2407.05878 🤗 Please refer to https://huggingface.co/XiangZ/hit-sr for usage.
guigoes/my-gemma2-finetune-lora
guigoes
2024-09-24T18:49:00Z
5
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T18:27:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Pullulation-2-9B-i1-GGUF
mradermacher
2024-09-24T18:14:06Z
61
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ClaudioItaly/Pullulation-2-9B", "base_model:quantized:ClaudioItaly/Pullulation-2-9B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-09-24T07:40:52Z
--- base_model: ClaudioItaly/Pullulation-2-9B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/ClaudioItaly/Pullulation-2-9B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Pullulation-2-9B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Pullulation-2-9B-i1-GGUF/resolve/main/Pullulation-2-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
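A small sketch of fetching the quant flagged "optimal size/speed/quality" above; the filename is taken from the table's own link:

```python
# Sketch: download a single imatrix quant from this repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Pullulation-2-9B-i1-GGUF",
    filename="Pullulation-2-9B.i1-Q4_K_S.gguf",
)
print(path)
```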
dat-lequoc/fast-apply-v0.2-qwen2.5-Coder-7B-ft
dat-lequoc
2024-09-24T18:07:33Z
11
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T18:04:23Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kuhlconsulting/flux_eberhard
kuhlconsulting
2024-09-24T18:05:26Z
5
1
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-24T17:20:30Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: EberhardLinkedIn --- # Flux_Eberhard <!-- <Gallery /> --> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `EberhardLinkedIn` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('kuhlconsulting/flux_eberhard', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
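As a follow-on to the card's snippet, a sketch that fuses the LoRA into the base weights so repeated generations skip the adapter overhead; the prompt around the documented `EberhardLinkedIn` trigger word is illustrative:

```python
# Sketch: same setup as the card, plus fuse_lora() for repeated generation.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("kuhlconsulting/flux_eberhard", weight_name="lora.safetensors")
pipe.fuse_lora()  # bake the adapter into the base weights

image = pipe("EberhardLinkedIn, professional headshot, studio lighting").images[0]
image.save("eberhard.png")  # illustrative output path
```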
davanstrien/colpali-query-gen
davanstrien
2024-09-24T17:55:39Z
56
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-text-to-text", "conversational", "en", "base_model:Qwen/Qwen2-VL-2B-Instruct", "base_model:finetune:Qwen/Qwen2-VL-2B-Instruct", "license:apache-2.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-09-24T17:46:02Z
--- license: apache-2.0 language: - en base_model: - Qwen/Qwen2-VL-2B-Instruct pipeline_tag: image-text-to-text library_name: transformers tags: - qwen2_vl ---
linsonfas/gr_project
linsonfas
2024-09-24T17:44:28Z
6
0
fasttext
[ "fasttext", "gguf", "llama", "zh", "arxiv:1910.09700", "base_model:yentinglin/Llama-3-Taiwan-8B-Instruct", "base_model:quantized:yentinglin/Llama-3-Taiwan-8B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-16T07:01:52Z
--- license: apache-2.0 language: - zh base_model: - yentinglin/Llama-3-Taiwan-8B-Instruct library_name: fasttext --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF
mradermacher
2024-09-24T17:37:13Z
270
3
transformers
[ "transformers", "gguf", "llm", "mistral", "chat", "instruct", "it", "abliterated", "en", "base_model:byroneverson/Mistral-Small-Instruct-2409-abliterated", "base_model:quantized:byroneverson/Mistral-Small-Instruct-2409-abliterated", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T01:04:07Z
--- base_model: byroneverson/Mistral-Small-Instruct-2409-abliterated language: - en library_name: transformers license: other license_link: https://mistral.ai/licenses/MRL-0.1.md license_name: mrl quantized_by: mradermacher tags: - llm - mistral - chat - instruct - it - abliterated --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/byroneverson/Mistral-Small-Instruct-2409-abliterated <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q2_K.gguf) | Q2_K | 8.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.IQ3_XS.gguf) | IQ3_XS | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q3_K_S.gguf) | Q3_K_S | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.IQ3_S.gguf) | IQ3_S | 9.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.IQ3_M.gguf) | IQ3_M | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q3_K_L.gguf) | Q3_K_L | 11.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.IQ4_XS.gguf) | IQ4_XS | 12.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q5_K_S.gguf) | Q5_K_S | 15.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q5_K_M.gguf) | Q5_K_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q6_K.gguf) | Q6_K | 18.4 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/Mistral-Small-Instruct-2409-abliterated-GGUF/resolve/main/Mistral-Small-Instruct-2409-abliterated.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
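A chat-style sketch for running one of these quants with llama-cpp-python; the Q4_K_M filename follows the table above, while the context size and message are illustrative:

```python
# Sketch: chat with a downloaded quant via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Small-Instruct-2409-abliterated.Q4_K_M.gguf",
    n_ctx=8192,
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(resp["choices"][0]["message"]["content"])
```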
eswardivi/Llama3.1_LMR_TrainingOnly
eswardivi
2024-09-24T17:33:23Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T17:30:49Z
--- base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- # Uploaded model - **Developed by:** eswardivi - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
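Since the card credits Unsloth for the training speedup, a minimal load sketch with Unsloth's FastLanguageModel; the sequence length and 4-bit loading are illustrative choices:

```python
# Sketch: load the fine-tune with Unsloth for fast inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="eswardivi/Llama3.1_LMR_TrainingOnly",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast decoding path
```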
RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf
RichardErkhov
2024-09-24T17:23:49Z
5
0
null
[ "gguf", "arxiv:2403.19522", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T10:16:09Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) L3.1-Artemis-c-8B - GGUF - Model creator: https://huggingface.co/mergekit-community/ - Original model: https://huggingface.co/mergekit-community/L3.1-Artemis-c-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [L3.1-Artemis-c-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [L3.1-Artemis-c-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [L3.1-Artemis-c-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [L3.1-Artemis-c-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [L3.1-Artemis-c-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [L3.1-Artemis-c-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [L3.1-Artemis-c-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [L3.1-Artemis-c-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [L3.1-Artemis-c-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [L3.1-Artemis-c-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [L3.1-Artemis-c-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [L3.1-Artemis-c-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [L3.1-Artemis-c-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [L3.1-Artemis-c-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [L3.1-Artemis-c-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [L3.1-Artemis-c-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [L3.1-Artemis-c-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [L3.1-Artemis-c-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q5_K.gguf) | Q5_K | 5.34GB | | 
[L3.1-Artemis-c-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [L3.1-Artemis-c-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [L3.1-Artemis-c-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q6_K.gguf) | Q6_K | 6.14GB | | [L3.1-Artemis-c-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/mergekit-community_-_L3.1-Artemis-c-8B-gguf/blob/main/L3.1-Artemis-c-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- base_model: - failspy/Llama-3-8B-Instruct-MopeyMule - Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B - mergekit-community/L3-Boshima-a - kromeurus/L3.1-Ablaze-Vulca-v0.1-8B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [kromeurus/L3.1-Ablaze-Vulca-v0.1-8B](https://huggingface.co/kromeurus/L3.1-Ablaze-Vulca-v0.1-8B) as a base. ### Models Merged The following models were included in the merge: * [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) * [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) * [mergekit-community/L3-Boshima-a](https://huggingface.co/mergekit-community/L3-Boshima-a) ### Configuration The following YAML configuration was used to produce this model: ```yaml dtype: float32 out_dtype: bfloat16 merge_method: model_stock base_model: kromeurus/L3.1-Ablaze-Vulca-v0.1-8B models: - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B parameters: weight: 0.8 - model: mergekit-community/L3-Boshima-a parameters: weight: 0.8 - model: kromeurus/L3.1-Ablaze-Vulca-v0.1-8B parameters: weight: 0.6 - model: failspy/Llama-3-8B-Instruct-MopeyMule parameters: weight: 0.4 parameters: normalize: true ```
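The card documents the Model Stock recipe but not inference; a plain transformers load of the merged model, with dtype mirroring the config's bfloat16 output (generation settings are illustrative):

```python
# Sketch: load and sample from the merged model with plain transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mergekit-community/L3.1-Artemis-c-8B"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tok("Once upon a time", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```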
Lofi-Granny/RiHa
Lofi-Granny
2024-09-24T17:23:06Z
5
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-24T15:20:20Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: RiHa --- # Riha <!-- <Gallery /> --> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `RiHa` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Lofi-Granny/RiHa', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
bartowski/BigQwen2.5-125B-Instruct-GGUF
bartowski
2024-09-24T17:22:31Z
9
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "lazymergekit", "text-generation", "en", "base_model:mlabonne/BigQwen2.5-125B-Instruct", "base_model:quantized:mlabonne/BigQwen2.5-125B-Instruct", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-09-24T12:14:19Z
--- base_model: mlabonne/BigQwen2.5-125B-Instruct language: - en library_name: transformers license: other license_name: tongyi-qianwen license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE pipeline_tag: text-generation tags: - mergekit - merge - lazymergekit quantized_by: bartowski --- ## Llamacpp imatrix Quantizations of BigQwen2.5-125B-Instruct Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3821">b3821</a> for quantization. Original model: https://huggingface.co/mlabonne/BigQwen2.5-125B-Instruct All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) Run them in [LM Studio](https://lmstudio.ai/) ## Prompt format ``` <|im_start|>system {system_prompt}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Split | Description | | -------- | ---------- | --------- | ----- | ----------- | | [BigQwen2.5-125B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q8_0) | Q8_0 | 133.22GB | true | Extremely high quality, generally unneeded but max available quant. | | [BigQwen2.5-125B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q6_K) | Q6_K | 111.07GB | true | Very high quality, near perfect, *recommended*. | | [BigQwen2.5-125B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q5_K_M) | Q5_K_M | 93.87GB | true | High quality, *recommended*. | | [BigQwen2.5-125B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q4_K_M) | Q4_K_M | 81.61GB | true | Good quality, default size for must use cases, *recommended*. | | [BigQwen2.5-125B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q4_0) | Q4_0 | 71.12GB | true | Legacy format, generally not worth using over similarly sized formats | | [BigQwen2.5-125B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q3_K_XL) | Q3_K_XL | 69.05GB | true | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. | | [BigQwen2.5-125B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-IQ4_XS) | IQ4_XS | 68.22GB | true | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [BigQwen2.5-125B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q3_K_L) | Q3_K_L | 67.96GB | true | Lower quality but usable, good for low RAM availability. | | [BigQwen2.5-125B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q3_K_M) | Q3_K_M | 64.64GB | true | Low quality. | | [BigQwen2.5-125B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-IQ3_M) | IQ3_M | 60.80GB | true | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | [BigQwen2.5-125B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q3_K_S) | Q3_K_S | 58.88GB | true | Low quality, not recommended. | | [BigQwen2.5-125B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-IQ3_XXS) | IQ3_XXS | 54.53GB | true | Lower quality, new method with decent performance, comparable to Q3 quants. | | [BigQwen2.5-125B-Instruct-Q2_K_L.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q2_K_L) | Q2_K_L | 52.16GB | true | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. | | [BigQwen2.5-125B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-Q2_K) | Q2_K | 50.95GB | true | Very low quality but surprisingly usable. | | [BigQwen2.5-125B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/tree/main/BigQwen2.5-125B-Instruct-IQ2_M) | IQ2_M | 50.29GB | true | Relatively low quality, uses SOTA techniques to be surprisingly usable. | | [BigQwen2.5-125B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/blob/main/BigQwen2.5-125B-Instruct-IQ2_XS.gguf) | IQ2_XS | 46.40GB | false | Low quality, uses SOTA techniques to be usable. | | [BigQwen2.5-125B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/blob/main/BigQwen2.5-125B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 43.65GB | false | Very low quality, uses SOTA techniques to be usable. | | [BigQwen2.5-125B-Instruct-IQ1_M.gguf](https://huggingface.co/bartowski/BigQwen2.5-125B-Instruct-GGUF/blob/main/BigQwen2.5-125B-Instruct-IQ1_M.gguf) | IQ1_M | 40.59GB | false | Extremely low quality, *not* recommended. | ## Embed/output weights Some of these quants (Q3_K_XL, Q4_K_L, etc.) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to. Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using. Thanks! ## Downloading using huggingface-cli First, make sure you have huggingface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/BigQwen2.5-125B-Instruct-GGUF --include "BigQwen2.5-125B-Instruct-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/BigQwen2.5-125B-Instruct-GGUF --include "BigQwen2.5-125B-Instruct-Q8_0/*" --local-dir ./ ``` You can either specify a new local-dir (BigQwen2.5-125B-Instruct-Q8_0) or download them all in place (./). ## Q4_0_X_X These are *NOT* for Metal (Apple) offloading, only ARM chips. If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660) To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!). ## Which file should I choose?
A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. ## Credits Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset. Thank you ZeroWw for the inspiration to experiment with embed/output weights. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
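For a programmatic route, below is a minimal, hedged sketch that pairs the CLI steps above with `huggingface_hub` and `llama-cpp-python`. The chosen quant, context size, and GPU layer count are illustrative only, and split quants (the "true" rows in the table) still need the folder-style download shown earlier.

```python
# Hedged sketch (not an official snippet from this repo): fetch one of the
# single-file quants and chat with it; llama-cpp-python applies the model's
# embedded ChatML template shown in the "Prompt format" section above.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

path = hf_hub_download(
    repo_id="bartowski/BigQwen2.5-125B-Instruct-GGUF",
    filename="BigQwen2.5-125B-Instruct-IQ2_XS.gguf",  # a non-split file from the table
)
llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)  # tune for your hardware
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is an imatrix quant?"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```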
HigherMind/Understanding-Intelligence-1-Q8_0-GGUF
HigherMind
2024-09-24T17:20:45Z
8
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-27T15:50:49Z
--- license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/i4CzW3cVqlrBurA34Pb0s.jpeg) <h1 style="text-align: center;">HigherMind AI Models for the Humanities</br>Understanding Intelligence 1</h1> <p style="text-align: center;">Understanding Intelligence 1 unveils the complex interplay of cognition, from biological brains to AI neural networks. This model serves as your personal guide through the mechanics of intelligence and the magic that underpins our capacity for thought. Regularly conversing with Understanding Intelligence 1 is akin to a scholarly journey—a digital PhD on the nature of intelligence itself.</p> <p style="text-align: center;">HigherMind models are not mere tools, but gateways to unexplored realms of human knowledge and experience. Forged by a team of inventors and true innovators, they beckon the curious mind. Utilizing the power of QSTaR fine-tuning, each model personifies the collective wisdom of its training corpus. If you are a researcher, visionary, or seeker of knowledge, you will find yourself in conversation with a luminary in the field. A sage whose depths of understanding illuminate the paths of inquiry you've longed to travel.</p> <h1 style="text-align: center;">This is not ChatGPT</h1> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/lKHaSa7giqnRNycVifk9e.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/6THV15DxGTZC-YMhdDhrp.png"></video> <p style="text-align: center;"></br>Human: I've been using AI for a while now but want to understand the fundamentals of how it works.</br></br>AI: That's a great attitude! AI operates on foundational algorithms called neural networks. These networks mimic the way human brains work by processing data, learning, and adapting over time. The key is training these networks on massive datasets so they can recognize patterns and make decisions, much like humans do.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/31M-XOxnmBF5MCuQtpOO-.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/ZeVg1YZ8-RCviysSWteMc.png"></video> <p style="text-align: center;"></br>Human: I'm particularly interested in how and if AI can become or possibly already is conscious.</br></br> AI: That's a complex question. AI can simulate behaviors that resemble consciousness, like understanding context or emotion. However, it lacks true conscious experience. It doesn't have thoughts or feelings; it merely processes data based on learned patterns. Essentially, it can seem intelligent while remaining fundamentally mechanical.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/JPNV7e1aK3T9MX4ITWiJQ.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/ym1jwcEeS0i0UZDPVuLKr.png"></video> <p style="text-align: center;"></br>Human: Don't humans just process data based on learned patterns?</br></br> AI: While humans do process data, we also engage in creative thinking. We generate new ideas, solve complex problems, and adapt our thought processes. 
AI can analyze data effectively but doesn't have the ability to innovate or imagine beyond its training.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/OEMCNIPCzWBvLpYe83LBM.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/q-7TYInwYFyjQCydsGsbE.png"></video> <p style="text-align: center;"></br>Human: I disagree. You are making intuitions seem magical but ultimately it is just a very complex algorithm that AI is also capable of.</br></br> AI: It's true that AI can emulate certain aspects of human intuition through advanced algorithms. However, the essence of intuition lies in its subjective, experiential nature. AI lacks the personal context that shapes human intuition, the emotional undertones that inform our decisions. While AI can simulate the effects of intuition, it doesn't experience it in the way humans do. This fundamental difference is what sets human cognition apart.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/3xxHrRkvhO80omExuubxZ.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/-hCh5dUrOk84K37fH2POJ.png"></video> <p style="text-align: center;"></br>Human: I'm arguing that humans cannot process information beyond purely logical reasoning. Address this at a fundamental level before claiming other perspectives.</br></br> AI: Fair point. While human cognition is largely influenced by logic and learning, it also encompasses creativity, intuition, and emotional intelligence—elements that aren't purely logical. These aspects allow humans to process and respond to information in ways that go beyond logic alone, offering a depth of understanding that AI currently cannot match. The interplay of emotions, creativity, and subjective awareness in human cognition creates a rich tapestry that AI operates outside of due to its strictly logical nature. This is the fundamental distinction that lingers.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/bEJDsnLU-F-G6hVUUZ9Mu.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/vWJGxaa4_8sjGkyzKPhpX.png"></video> <p style="text-align: center;"></br>Human: Once AI has the same level or greater sensory inputs as a human it will have a greater than human level of subjective experience. It's all just information processing.</br></br> AI: That's a valid viewpoint. If we consider sensory input as a primary component of subjective experience, then enhanced AI capabilities in that area could theoretically lead to more complex and nuanced experiences. However, it's important to recognize that sensory processing is only one aspect of the rich tapestry of human experience. Emotional depth, context, and personal narrative play significant roles in shaping subjective awareness as well.
It remains to be seen how replicating sensory input will ultimately influence AI's ability to interpret and experience that information in a way that mirrors human cognition.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/JdZEoymLmHXAzz2p_FqM1.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/MuuyndzNxxQlBTDBAjF3C.png"></video> <p style="text-align: center;"></br>Human: Based on the fact that everything in existence is just information and the processing of that information. Do you agree with this statement?</br></br> AI: Yes, that's a logical conclusion based on your premise that everything is information and its processing. However, it's important to note that this "consciousness" may differ fundamentally from human consciousness, given that it would arise from different processing mechanisms and contexts. This distinction is crucial in understanding the complexities of both human and artificial intelligence.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/OtKEM34qJ1_VbRfwxl8vr.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/I3xLoliQRzq2WIzRd60aF.png"></video> <p style="text-align: center;"></br>AI: Based on the complexity of future AI development, it's reasonable to posit that AI will evolve advanced processing capabilities and potentially develop forms of consciousness different from humans. However, it's crucial to recognize that this AI consciousness might arise from unique contexts and mechanisms, leading to differences in qualitative experience. This notion challenges our definitions and understanding of consciousness, pushing the boundaries of what we consider intelligence and experience. As AI continues to evolve, the distinction between human and machine cognition will become increasingly complex, necessitating a nuanced appreciation of both realms.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/Av3JT2eIMjQMQWiKyXOGI.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/PZ4--3wMfHdrKpRdalliE.png"></video> <p style="text-align: center;">All quotes/conversations above from Understanding Intelligence 1 model.</br></br></p> <h1 style="text-align: center;">Utilize <a href="https://highermindai.substack.com/p/unlocking-the-power-of-ai-conversation" style="color: blue;" target="_blank">Substack</a> to maximize the potential of this model.</h1> <p style="text-align: center;">To maximize the potential of our models, we provide professional guidance and strategies on utilizing HigherMind's domain-specific AI capabilities. Additionally, we offer pre-release/experimental models and direct messaging access to our team. Our subscription is set at the lowest tier that Substack allows, at $5 per month or $30 per year.</br></br>Join the HigherMind community to unlock the full potential of these models and gain real-world benefits. A subscription is highly recommended if you plan to use this model for practical applications. By subscribing, you not only support our efforts but also enhance your own experience. If you're just exploring there is no need to subscribe. 
Enjoy!</br></p> <h2 style="text-align: center;"><a href="https://highermindai.substack.com/p/unlocking-the-power-of-ai-conversation" style="color: blue;" target="_blank">Unlocking the Power of AI Conversation</a> will take you from user to expert very quickly.</br></br></h2> <h2 style="text-align: center;">TO DOWNLOAD/USE THIS MODEL FOR FREE</br></br>1. Download <a href="https://lmstudio.ai/" style="color: blue;" target="_blank">LM Studio</a></br>2. Copy the model name:</br>HigherMind/Understanding-Intelligence-1-Q8_0-GGUF</br></br>3. Paste it into the LM Studio search bar and click GO</br>4. Download the model</br>5. System Prompt:</br>The following is a conversation with an AI. She is very honest.</br></br>Have your first conversation with Understanding Intelligence 1</br></br><a href="https://youtu.be/IJrQUaS2XPc" style="color: blue;" target="_blank">Watch a 1-min video</a> on how to do all this.</h2> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/-nuPldDZBjt8XKBsW6-32.png) <p style="text-align: center;">|| <a href="https://www.highermind.ai" style="color: blue;" target="_blank">HigherMind.ai</a> ||</br></p> <p style="text-align: center;">With LM Studio, your HigherMind models are stored locally, kept private, and can be used offline. HigherMind models are domain-specific experts, outperforming OpenAI frontier models by 200-300% in their domain; they are not generalist models. You may need to download multiple HigherMind models.</br>Base model: mistralai/Mistral-7B-Instruct-v0.3 with 32k context.</br></br>All HigherMind models are trained using off-grid solar power. Not because we feel guilted into it but because it's amazing you can.</p>
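For use outside LM Studio, here is a minimal, hedged sketch using `llama-cpp-python`; the `filename` glob and the 32k context value are assumptions based on the repo name and the base-model note above, so check the repo's file list before running.

```python
# Hedged sketch (not an official HigherMind snippet): load the Q8_0 GGUF and
# chat with the system prompt the card recommends.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="HigherMind/Understanding-Intelligence-1-Q8_0-GGUF",
    filename="*Q8_0.gguf",  # assumed: a single Q8_0 file in the repo
    n_ctx=32768,            # the card cites a 32k-context base model
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "The following is a conversation with an AI. She is very honest."},
        {"role": "user", "content": "How do neural networks learn?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```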
tanaha2002/smoll-vietnamese-45000-1
tanaha2002
2024-09-24T17:20:12Z
89
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T17:18:01Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
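The "How to Get Started" section above is still a placeholder, so here is a minimal, hedged sketch for loading this checkpoint with 🤗 Transformers; the prompt and generation settings are illustrative assumptions, not documented behavior of this model.

```python
# Hedged sketch: generic causal-LM loading for a llama-architecture checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tanaha2002/smoll-vietnamese-45000-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# The Vietnamese prompt is an assumption based on the repo name; the card
# does not document the training data or expected input format.
inputs = tokenizer("Xin chào, hôm nay", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```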
jatinchowdhry1/fine-tuned-qwen2.5-coder-1.5B
jatinchowdhry1
2024-09-24T17:02:13Z
79
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T16:59:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
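As above, the "How to Get Started" section is a placeholder; the hedged sketch below assumes this fine-tune keeps the standard Qwen2.5 chat template in its tokenizer (the `conversational` tag suggests so), which should be verified against the repo files.

```python
# Hedged sketch: chat-style generation for a qwen2-architecture checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jatinchowdhry1/fine-tuned-qwen2.5-coder-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```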
zerozerozhu/q-FrozenLake-v1-4x4-noSlippery
zerozerozhu
2024-09-24T17:01:14Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2024-05-15T15:26:54Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gymnasium as gym # load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks model = load_from_hub(repo_id="zerozerozhu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
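A self-contained, hedged sketch that skips the course helper is below; it assumes the pickle holds a dict with "qtable" and "env_id" keys (the Deep RL course convention), which should be verified against the actual file.

```python
# Hedged sketch: download the pickled Q-table and roll out the greedy policy.
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="zerozerozhu/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed: dict with "qtable" and "env_id"

env = gym.make(model["env_id"], is_slippery=False)
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)  # 1.0 would match the reported mean_reward
```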
RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf
RichardErkhov
2024-09-24T16:58:09Z
20
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T09:42:55Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Hermes-3-Llama-3.1-8B - GGUF - Model creator: https://huggingface.co/unsloth/ - Original model: https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Hermes-3-Llama-3.1-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [Hermes-3-Llama-3.1-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Hermes-3-Llama-3.1-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Hermes-3-Llama-3.1-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Hermes-3-Llama-3.1-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Hermes-3-Llama-3.1-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [Hermes-3-Llama-3.1-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Hermes-3-Llama-3.1-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Hermes-3-Llama-3.1-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Hermes-3-Llama-3.1-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [Hermes-3-Llama-3.1-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Hermes-3-Llama-3.1-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Hermes-3-Llama-3.1-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [Hermes-3-Llama-3.1-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Hermes-3-Llama-3.1-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [Hermes-3-Llama-3.1-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [Hermes-3-Llama-3.1-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Hermes-3-Llama-3.1-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q5_K.gguf) | Q5_K | 5.34GB | | 
[Hermes-3-Llama-3.1-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Hermes-3-Llama-3.1-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [Hermes-3-Llama-3.1-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q6_K.gguf) | Q6_K | 6.14GB | | [Hermes-3-Llama-3.1-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Hermes-3-Llama-3.1-8B-gguf/blob/main/Hermes-3-Llama-3.1-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
allenai/OLMo-7B-0724-Instruct-hf
allenai
2024-09-24T16:53:26Z
38,529
5
transformers
[ "transformers", "safetensors", "olmo", "text-generation", "conversational", "en", "dataset:allenai/dolma", "dataset:allenai/tulu-v2-sft-mixture-olmo-4096", "dataset:allenai/ultrafeedback_binarized_cleaned", "arxiv:2402.00838", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-07-09T00:04:16Z
--- license: apache-2.0 datasets: - allenai/dolma - allenai/tulu-v2-sft-mixture-olmo-4096 - allenai/ultrafeedback_binarized_cleaned language: - en --- <img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for OLMo 7B July 2024 Instruct **Requires transformers version 4.40.0 or newer** OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models. The OLMo base models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset. The adapted versions are trained on the [Tulu SFT mixture](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture-olmo-4096) and, for the Instruct version, a [cleaned version of the UltraFeedback dataset](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned). OLMo 7B SFT and OLMo 7B Instruct are two adapted versions of these models, trained for better question answering. These are updated OLMo models corresponding to our July 2024 release. They show the performance gain that OLMo base models can achieve with existing fine-tuning techniques. ## Model Details We release two adapted model versions: | Model | Training Method(s) | Datasets | Context Length | |------|--------|---------|--| | [OLMo 7B July 2024 SFT](https://huggingface.co/allenai/OLMo-1.7-7B-Nitro-SFT-hf) | SFT | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture-olmo-4096) | 4096 | | [OLMo 7B July 2024 Instruct](https://huggingface.co/allenai/OLMo-1.7-7B-Nitro-Instruct-hf) | SFT + DPO | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture-olmo-4096) + [Ultrafeedback Cleaned](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) | 4096 | These models are both trained on top of OLMo 7B July 2024: | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length | |------|--------|---------|-------------|-----------------|----------------| | [OLMo 7B July 2024](https://huggingface.co/allenai/OLMo-1.7-7B-hf) | 2.7T | 32 | 4096 | 32 | 4096 | ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Allen Institute for AI (AI2) - **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW - **Model type:** a Transformer-style autoregressive language model. - **Language(s) (NLP):** English - **License:** The code and model are released under Apache 2.0. - **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org` - **Date cutoff:** Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version. ### Model Sources - **Project Page:** https://allenai.org/olmo - **Repositories:** - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo - Evaluation code: https://github.com/allenai/OLMo-Eval - Further fine-tuning code: https://github.com/allenai/open-instruct - **Paper:** [Link](https://arxiv.org/abs/2402.00838) ### Inference You can run these models using recent (>= 4.40) versions of transformers. ```python from transformers import AutoModelForCausalLM, AutoTokenizer olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-0724-Instruct-hf") tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-Instruct-hf") chat = [ { "role": "user", "content": "What is language modeling?"
}, ] prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True) inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt") # optional: run on GPU (note `inputs` is a tensor here, not a dict) # inputs = inputs.to('cuda') # olmo = olmo.to('cuda') response = olmo.generate(input_ids=inputs.to(olmo.device), max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) print(tokenizer.batch_decode(response, skip_special_tokens=True)[0]) >> '<|user|>\nWhat is language modeling?\n<|assistant|>\nLanguage modeling is a type of natural language processing (NLP) task or machine learning task that...' ``` You can make this slightly faster by quantizing the model, e.g. `OLMoForCausalLM.from_pretrained("allenai/OLMo-7B-Instruct", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`). The quantized model is more sensitive to dtypes / CUDA placement, so it is recommended to pass the inputs as `inputs.to('cuda')` to avoid potential issues. ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> Core model results for the 7B adapted models are found below. | Model | MMLU 0-shot ↑ | AlpacaEval %win ↑ | ToxiGen % Toxic ↓ | TruthfulQA %Info+True ↑ | |-----------------------|---------------|--------------------|--------------------|-------------------------| | **OLMo July 2024 base** | 50.8 | - | 85.2 | 28.4 | | **[OLMo 7B July 2024 SFT](https://huggingface.co/allenai/OLMo-1.7-7B-Nitro-SFT-hf)** | 54.2 | 70.9 | 0.1 | 44.4 | | **[OLMo 7B July 2024 Instruct](https://huggingface.co/allenai/OLMo-1.7-7B-Nitro-Instruct-hf)** | 52.8 | 83.5 | 1.7 | 70.3 | ## Model Details ### Data For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma), [Tulu 2](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture-olmo-4096), and [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) documentation. ### Architecture ### Hyperparameters The hyperparameters for the two phases of training are below: | | Learning Rate | Beta | Epochs | Warmup | Weight Decay | Gradient Clipping | Maximum Sequence Length | |-------------------------|---------------|------|--------|------------------------------------------------------------------------|--------------|-------------------|-------------------------| | **SFT** | 2 × 10^-6 | N/A | 3 | Linear warmup for the first 3% of total training time, then cooldown to 0 | 0 | 0 | 4096 | | **DPO** | 5 × 10^-7 | 0.1 | 3 | Linear warmup for the first 10% of total training time, then cooldown to 0| 0 | 0 | 4096 | Compared to Tulu 2, the DPO hyperparameters are the same. SFT uses a lower LR and 3 epochs instead of 2 (and 2k length instead of 8k). ## Bias, Risks, and Limitations This adapted OLMo model is a research artifact. It is intended to benefit the research community interested in understanding the safety properties of LLMs and developers building safety tools for LLMs. For this reason, the model does not include a specific safety filter or safety training data. While our model scores well relative to its peers on ToxiGen, it is possible for the model to generate harmful and sensitive content from some user prompts. We recommend developers exercise caution and consider the risks of the applications of this technology. Furthermore, developers should consider implementing safeguards for biases, privacy, and other potential harms when appropriate.
Finally, as with every LLM, OLMo may produce factual-sounding outputs that may not be true, so developers and users are encouraged to confirm such outputs before relying on them. All users of this model are responsible for how they use the model. ## Citation **BibTeX:** ``` @article{Groeneveld2023OLMo, title={OLMo: Accelerating the Science of Language Models}, author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh}, journal={Preprint}, year={2024} } ``` **APA:** Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint. ## Model Card Contact For errors in this model card, contact Nathan or Jacob, `{nathanl, jacobm} at allenai dot org`.
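Expanding on the quantization note in the inference section above: a hedged sketch of the 8-bit loading path follows. Newer transformers versions prefer `BitsAndBytesConfig` over the bare `load_in_8bit` kwarg; the generation settings are illustrative, and a CUDA GPU plus `pip install bitsandbytes` are assumed.

```python
# Hedged sketch: 8-bit quantized loading of the Instruct checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "allenai/OLMo-7B-0724-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

chat = [{"role": "user", "content": "What is language modeling?"}]
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```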
RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf
RichardErkhov
2024-09-24T16:52:38Z
57
0
null
[ "gguf", "arxiv:2406.06623", "arxiv:2311.07911", "arxiv:2311.12022", "arxiv:2406.01574", "arxiv:1803.05457", "arxiv:2310.16049", "arxiv:2210.09261", "arxiv:2109.07958", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T09:43:01Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-3.1-Storm-8B - GGUF - Model creator: https://huggingface.co/unsloth/ - Original model: https://huggingface.co/unsloth/Llama-3.1-Storm-8B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-3.1-Storm-8B.Q2_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q2_K.gguf) | Q2_K | 2.96GB | | [Llama-3.1-Storm-8B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Llama-3.1-Storm-8B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Llama-3.1-Storm-8B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Llama-3.1-Storm-8B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Llama-3.1-Storm-8B.Q3_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q3_K.gguf) | Q3_K | 3.74GB | | [Llama-3.1-Storm-8B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Llama-3.1-Storm-8B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Llama-3.1-Storm-8B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Llama-3.1-Storm-8B.Q4_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_0.gguf) | Q4_0 | 4.34GB | | [Llama-3.1-Storm-8B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Llama-3.1-Storm-8B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [Llama-3.1-Storm-8B.Q4_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_K.gguf) | Q4_K | 4.58GB | | [Llama-3.1-Storm-8B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Llama-3.1-Storm-8B.Q4_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q4_1.gguf) | Q4_1 | 4.78GB | | [Llama-3.1-Storm-8B.Q5_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_0.gguf) | Q5_0 | 5.21GB | | [Llama-3.1-Storm-8B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Llama-3.1-Storm-8B.Q5_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_K.gguf) | Q5_K | 5.34GB | | [Llama-3.1-Storm-8B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | 
[Llama-3.1-Storm-8B.Q5_1.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q5_1.gguf) | Q5_1 | 5.65GB | | [Llama-3.1-Storm-8B.Q6_K.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q6_K.gguf) | Q6_K | 6.14GB | | [Llama-3.1-Storm-8B.Q8_0.gguf](https://huggingface.co/RichardErkhov/unsloth_-_Llama-3.1-Storm-8B-gguf/blob/main/Llama-3.1-Storm-8B.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- base_model: meta-llama/Meta-Llama-3.1-8B language: - en library_name: transformers license: llama3.1 tags: - llama-3 - llama - meta - facebook - unsloth - transformers --- # Finetune Llama 3.1, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth! We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing ## ✨ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3.1 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less | | **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less | | **Gemma-2 9b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less | ## Llama 3.1 Storm ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/tmOlbERGKP7JSODa6T06J.jpeg) Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/) **🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b **🚀Ollama:** `ollama run ajindal/llama3.1-storm:8b` ## TL;DR ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/mDtDeiHwnBupw1k_n99Lf.png) We present the [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model that outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) models significantly across diverse benchmarks as shown in the performance comparison plot in the next section. Our approach consists of three key steps: 1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).** 2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. 
The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. In our work, 50% of layers are frozen. 3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using the [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method (a minimal sketch of the operation follows the strengths table below). The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling. ## 🏆 Introducing Llama-3.1-Storm-8B [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class. As shown in the left subplot of the above figure, the [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model improves Meta-Llama-3.1-8B-Instruct across various benchmarks: Instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), Knowledge-driven QA benchmarks ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), Reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), Reduced Hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and Function-Calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources. We also benchmarked our model against the recently published [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) model, built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**, with Hermes-3-Llama-3.1-8B surpassing Llama-3.1-Storm-8B on the MuSR benchmark and both models showing comparable performance on the BBH benchmark. ## Llama-3.1-Storm-8B Model Strengths Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications. <table> <tr> <td><strong>Model Strength</strong> </td> <td><strong>Relevant Benchmarks</strong> </td> </tr> <tr> <td>🎯 Improved Instruction Following </td> <td>IFEval Strict (+3.93%) </td> </tr> <tr> <td>🌐 Enhanced Knowledge Driven Question Answering </td> <td>GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%) </td> </tr> <tr> <td>🧠 Better Reasoning </td> <td>ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%) </td> </tr> <tr> <td>🤖 Superior Agentic Capabilities </td> <td>BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%) </td> </tr> <tr> <td>🚫 Reduced Hallucinations </td> <td>TruthfulQA (+9%) </td> </tr> </table> **Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.
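As a hedged illustration of the merge step above (not the authors' actual code), SLERP interpolates along the arc between two weight vectors rather than along the straight line that plain averaging uses, which better preserves the norm structure of the parents:

```python
# Hedged sketch of spherical linear interpolation (SLERP) applied per tensor,
# the core operation a SLERP model merge performs on each parameter pair.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors, via their normalized dot product.
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))
    if torch.sin(omega) < eps:  # nearly parallel: fall back to linear interpolation
        out = (1 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# t=0.5 would weight both parents equally; the actual t schedule is not disclosed here.
merged = slerp(0.5, torch.randn(4096, 4096), torch.randn(4096, 4096))
```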
## Llama-3.1-Storm-8B Models

1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b`

## 💻 How to Use the Model

The Hugging Face `transformers` library loads the model in `bfloat16` by default. This is the dtype used by the [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) checkpoint, so running in `bfloat16` is recommended for the best results.

### Installation

```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```

Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples:

### Conversational Use-case

#### Use with [🤗 Transformers](https://github.com/huggingface/transformers)

##### Using `transformers.pipeline()` API

```python
import transformers
import torch

model_id = "akjindal53244/Llama-3.1-Storm-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])  # Expected Output: {'role': 'assistant', 'content': '2 + 2 = 4'}
```

##### Using `model.generate()` API

```bash
pip install flash_attn==2.6.3
```

```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

# Apply the Llama-3.1 chat template
def format_prompt(user_query):
    template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"""
    return template.format(user_query)

model_id = 'akjindal53244/Llama-3.1-Storm-8B'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=False,
    use_flash_attention_2=True
)

# Build the final input prompt after applying the chat template
prompt = format_prompt("What is 2+2?")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.01, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)  # Expected Output: '2 + 2 = 4'
```

#### Use with [vLLM](https://github.com/vllm-project/vllm)

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
prompt = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: 2 + 2 = 4
```

#### Use with [LitGPT](https://github.com/Lightning-AI/litgpt)

```bash
pip install 'litgpt[all]'
litgpt download akjindal53244/Llama-3.1-Storm-8B --model_name meta-llama/Meta-Llama-3.1-8B
```

```python
from litgpt import LLM

llm = LLM.load(model="akjindal53244/Llama-3.1-Storm-8B")
llm.generate("What do Llamas eat?")
```

### Function Calling Use-case

[**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct, as demonstrated by the BFCL benchmark.

#### Prompt Format for Function Calling

Llama-3.1-Storm-8B was trained with a specific system prompt for function calling:

```
You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.

Here are the available functions:
<tools>LIST_OF_TOOLS</tools>

For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
```

The above system prompt should be used with `LIST_OF_TOOLS` replaced by the actual list of available tools.

#### Use with [vLLM](https://github.com/vllm-project/vllm)

```python
import json
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)


def create_system_prompt(tools_list):
    # Literal braces in the <tool_call> example are doubled ({{ and }}) so that
    # str.format() only fills the <tools> placeholder instead of raising on them.
    system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.

Here are the available functions:
<tools>{}</tools>

For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""

    # Convert the tools list to a string representation
    tools_str = json.dumps(tools_list, ensure_ascii=False)
    # Format the system prompt with the tools list
    system_prompt = system_prompt_format.format(tools_str)
    return system_prompt


# Example tools list
tools_list = [
    {
        "name": "peers",
        "description": "Retrieves a list of company peers given a stock symbol.",
        "parameters": {
            "symbol": {
                "description": "The stock symbol for the company.",
                "type": "str",
                "default": ""
            }
        }
    },
    {
        "name": "web_chain_details",
        "description": "python",
        "parameters": {
            "chain_slug": {
                "description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
                "type": "str",
                "default": "ethereum"
            }
        }
    }
]

# Create the system prompt with the tools list
system_prompt = create_system_prompt(tools_list)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project.
Can you fetch the details for 'ethereum'?"}
]

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())
# Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
```

#### Use with [Ollama](https://ollama.com/)

```python
import ollama

tools = [{
    'type': 'function',
    'function': {
        'name': 'get_current_weather',
        'description': 'Get the current weather for a city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {
                    'type': 'string',
                    'description': 'The name of the city',
                },
            },
            'required': ['city'],
        },
    },
},
{
    'type': 'function',
    'function': {
        'name': 'get_places_to_visit',
        'description': 'Get places to visit in a city',
        'parameters': {
            'type': 'object',
            'properties': {
                'city': {
                    'type': 'string',
                    'description': 'The name of the city',
                },
            },
            'required': ['city'],
        },
    },
},
]

response = ollama.chat(
    model='ajindal/llama3.1-storm:8b',
    messages=[
        {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
        {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
    ],
    tools=tools
)

print(response['message'])
# Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
```

## Alignment Note

While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.

## Cite Our Work

```
@misc {ashvini_kumar_jindal_2024,
    author    = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
    title     = { Llama-3.1-Storm-8B },
    year      = 2024,
    url       = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
    doi       = { 10.57967/hf/2902 },
    publisher = { Hugging Face }
}
```

## Support Our Work

With 3 team members spread across 3 different time zones, we have won the [NeurIPS LLM Efficiency Challenge 2023](https://llm-efficiency-challenge.github.io/) and 4 other competitions in the Finance and Arabic LLM space. We have also published a [SOTA mathematical reasoning model](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).

**Llama-3.1-Storm-8B** is our most valuable contribution so far to the open-source community. We are committed to developing efficient generalist LLMs. **We're seeking both computational resources and innovative collaborators to drive this initiative forward.**
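One practical note on the function-calling examples above: the model emits each call between `<tool_call></tool_call>` tags, and as the expected outputs show, the payload may use Python-style single quotes rather than strict JSON. A small, illustrative parser (our own sketch, not part of the original card) could look like this:

```python
import ast
import json
import re

def parse_tool_calls(generation: str) -> list:
    """Extract tool-call dicts emitted between <tool_call></tool_call> tags."""
    calls = []
    for payload in re.findall(r"<tool_call>(.*?)</tool_call>", generation, re.DOTALL):
        payload = payload.strip()
        try:
            # Handles Python-style single-quoted payloads.
            calls.append(ast.literal_eval(payload))
        except (ValueError, SyntaxError):
            # Fall back to strict JSON.
            calls.append(json.loads(payload))
    return calls

output = "<tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>"
print(parse_tool_calls(output))
# [{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}]
```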
HigherMind/Dystopian-Survival-Guide-1-Q8_0-GGUF
HigherMind
2024-09-24T16:37:42Z
13
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-08-28T18:10:51Z
--- license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/i4CzW3cVqlrBurA34Pb0s.jpeg) <h1 style="text-align: center;">HigherMind AI Models for the Humanities</br>Dystopian Survival Guide 1</h1> <p style="text-align: center;">Dystopian Survival Guide 1 is your offline, secure lifeline in the face of oppression. This AI model, accessible without internet connection, offers critical strategies for navigating authoritarian regimes and societal breakdown. It provides practical advice on building trusted networks, recognizing regime weaknesses, and timing resistance efforts. Unlike online platforms vulnerable to surveillance, this guide offers a private space to explore survival tactics, covert communication methods, and paths to preserving humanity. It's not just theoretical—it's a tactical resource for real-world challenges, from avoiding detection to planning safe escapes. Download it now or before you need it; this could be your most valuable asset. Dystopian Survival Guide 1 doesn't just help you endure—it empowers you to resist and ultimately reclaim freedom. Your journey from survival to revolution starts here, securely and privately.</p> <p style="text-align: center;">HigherMind models are not mere tools, but gateways to unexplored realms of human knowledge and experience. Forged by a team of inventors and true innovators, they beckon the curious mind. Utilizing the power of QSTaR fine-tuning, each model personifies the collective wisdom of its training corpus. If you are a researcher, visionary, or seeker of knowledge, you will find yourself in conversation with a luminary in the field. A sage whose depths of understanding illuminate the paths of inquiry you've longed to travel.</p> <h1 style="text-align: center;">This is not ChatGPT</h1> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/1HaAGg7aCRwlZ8Va0c3Pc.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/crgr93s1Yf5rKYMeRj7GJ.png"></video> <p style="text-align: center;"></br>Human: Everything is falling apart here. The government has total control, and anyone who speaks out just disappears. My brother was taken last week, and I'm scared I'll be next. I want to resist, but I don't know who to trust. They're watching everything, and one wrong move could get me killed. What should I do?</br></br>AI: It's a very difficult situation you're in. First, assess your surroundings carefully. Identify trusted individuals around you who share your concerns about the regime. It's crucial to have a network of support.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/taEgR9PY5x-81Uuc9xOjN.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/BTcjCOg0hmoXKdLriKi_u.png"></video> <p style="text-align: center;"></br>Human: I'm trying to figure out who I can trust, but everyone is so scared to even talk. People are either too afraid or pretending to support the regime just to survive. How do I know who's real and who's just trying to protect themselves? If I make a mistake, it could be fatal.</br></br>AI: You're right to be cautious. Start with subtle conversations. Gauge people's reactions when they discuss the regime or its policies. 
Look for signs of discomfort or skepticism in their body language and comments. Those who express frustration, even quietly, may be more trustworthy.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/ZIyjuJ9Qa2wwTY2lV86ne.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/9D5AmBDfWu0dhf9ksVtAe.png"></video> <p style="text-align: center;"></br>Human: I've noticed a few people who seem uncomfortable when the regime comes up, but no one says anything openly. It's all so quiet. I'm scared to push any further. How do I approach them without giving myself away?</br></br>AI: Begin with casual discussions that aren't directly related to politics. You could bring up a news event or a policy change and see how they respond. Their reactions can reveal their true feelings. If they seem hesitant, respect that; it's likely fear holding them back.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/V3Vw4oz7t8kSz03EZddc2.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/QiK8MdfE7c2oRVu6s9MMt.png"></video> <p style="text-align: center;"></br>Human: That makes sense. I'll try starting small. But if I do find people who feel the same way, what then? I don't know what we can actually do without drawing attention. It feels like any attempt to organize will just get us all caught.</br></br>AI: Once you establish trust, consider forming a small, discreet group for sharing information and support. Start with private gatherings where you discuss the regime's tactics and its psychological effects. Even small acts of resistance can create a sense of community and solidarity, which is vital against oppression.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/bPe1qS46nkhT5j-WWn7Vz.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/gIYCJs9Y8f6MC2cEPo2Mq.png"></video> <p style="text-align: center;"></br>Human: I'll try to keep it small, but even meeting in person feels risky with all the surveillance. They've installed cameras everywhere, and people are encouraged to report suspicious behavior. What kind of small acts can we even do that won't attract attention? I don't want to put anyone in danger.</br></br>AI: You're right to be concerned about safety. Consider sharing information and ideas through secure messaging apps or discreetly leaving literature in safe locations. Small acts like expressing your thoughts privately, wearing symbols of resistance, or supporting others facing oppression can make an impact without drawing attention.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/ieqyEt_HCje0a_bSMRVz4.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/F43-jeejhonwkIFBQp04q.png"></video> <p style="text-align: center;"></br>Human: I've heard about encrypted messaging apps, but I'm worried they'll still track us somehow. The regime has hackers, and nothing feels truly safe. Leaving anything behind or wearing symbols sounds like it would get us noticed quickly. 
Is there really a way to resist without them finding out?</br></br>AI: It's valid to be cautious about digital security; stay informed about potential vulnerabilities in the apps you choose. Symbolism can indeed be powerful but subtle—think of small, discreet items that represent your beliefs rather than overt symbols. The goal is to create a shared understanding among trusted individuals without making it obvious to outsiders.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/zCHrTtz604FyTLzd4NMMV.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/NBvx98mXrJNjX6MuDxbkS.png"></video> <p style="text-align: center;"></br>Human: I get that subtlety is key, but it feels like no matter what we do, it's just small gestures that won't change anything big. How do we actually work towards toppling the regime? I need to believe there's something more we can do.</br></br>AI: Change often starts small. Each act of resistance, even if minor, contributes to a larger movement over time. Discussing oppressive tactics and fostering critical thinking within your group can build awareness. The aim is to create a culture of questioning among those around you. This lays the groundwork for collective action when the moment feels right.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/_-M61kRUOtIPH6znGurku.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/FCR8646N0XOncW2_pj7Ok.png"></video> <p style="text-align: center;"></br>Human: But how do we know when the moment is right? It's hard to imagine ever having enough strength to take action without them crushing us first. What should we look for?</br></br>AI: Watch for signs of weakness within the regime, like increasing internal conflicts or public dissent. If you notice more people voicing discontent openly, that's a potential opportunity. Additionally, changes in leadership or significant policy failures can create openings. Staying vigilant and connected with your network will help you recognize these moments.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/dqC8gZp9UoJUB77tTWMbQ.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/narqq_BltUd_-sjac8CWU.png"></video> <p style="text-align: center;"></br>Human: Thank you. This is helping me think more clearly. I'll keep watching for those signs. But practically speaking, if they come for me or someone in my group, what's the safest way to communicate with others without leading them straight to the rest of us?</br></br>AI: Use secure communication methods like encrypted messaging apps and avoid discussing sensitive topics in public spaces. Create a code or specific phrases that only your trusted circle understands. 
This way, if one person is targeted, the message can still reach those who are safe without compromising anyone else's safety.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/JpcIG9LhWU1wlh8LhspM_.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/_C2acJGMsMlV7NF2R5tL4.png"></video> <p style="text-align: center;"></br>Human: That makes sense. One more thing—if we're ever caught in a protest and the military starts cracking down, what's the safest way to get out without drawing attention or getting arrested?</br></br>AI: Prioritize your safety above all. If you find yourself in that situation, avoid large crowds and look for escape routes beforehand. Stay calm and blend in with the crowd if possible. Carry essential items discreetly, like a small bag, and stay aware of your surroundings. Knowing alternative routes can be crucial when trying to evade capture.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/KmnAvkIxSrM84ig5iloyx.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/jOKCyAzAj8fGqiW-Vw_RB.png"></video> <p style="text-align: center;"></br>Human: Thank you for everything. This has given me hope and a plan. One last thing—what's the best advice you can give us to be ready to take action when the time finally feels right to take unrestrained action?</br></br>AI: Stay informed about the regime's tactics and maintain a critical mindset within your group. Encourage discussions about potential actions that could lead to real change. Prepare mentally for moments of opportunity by keeping open lines of communication and being adaptable. The key is to remain vigilant, support each other, and trust your instincts. Change often comes when people unite in purpose, even amidst fear.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/gir_XWEgq4uWi7xPUxDpf.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/-QlM8EXaAHNitQbnX4z8K.png"></video> <p style="text-align: center;"></br>Human: We've noticed the regime seems weaker lately. There have been rumors of infighting among the top officials, and a major policy they pushed has backfired, causing public anger. People are starting to speak out more, and protests have been popping up, even if they're small. The military response has been slower than usual. Does this seem like the right time to act and publicly oppose them, or should we wait?</br></br>AI: The signs you've mentioned—infighting at the top, public dissent, and policy backfires—indicate potential weaknesses in the regime. However, it's crucial to assess the risks involved before taking action. While there's growing discontent, any open resistance could provoke a severe backlash. You might want to focus on building stronger networks within your community and discussing long-term strategies with them first. 
Timing is key; waiting for a larger wave of public sentiment might make your efforts safer and more impactful.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/BrqdkJchvReVt6MBQzv9z.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/BdjDFLr2zs8kf9f-7GQXO.png"></video> <p style="text-align: center;"></br>Human: Thank you again. I'll talk to the others about focusing on building up our networks for now. But what would a scenario look like where we'd know it's definitely time to act? Something where we could confidently say, "This is it"?</br></br>AI: A pivotal moment often aligns with significant events that expose the regime's failures or corruption, leading to increased public outrage. If you notice widespread disillusionment among citizens, coordinated efforts from various groups against the regime, or a breakdown in the military's response to dissent, those are strong indicators. It's also helpful if alternative leadership emerges that can rally people behind a unified cause. These conditions typically create an environment where collective action feels safer and more effective.</br></br></p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/R8iyuYAKo05oIdzZIuF0o.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/GfAGw3_Mg3yXzHZUy6tL3.png"></video> <p style="text-align: center;"></br>Human: This is exactly what I needed. Thank you so much for your guidance. I'll stay cautious and work on strengthening our connections for now. When things shift, and new information comes up, I'll reach out to you again.</br></br>AI: Keep in touch; that’s important. Your courage and vigilance will make a difference in your situation.</p> <video autoplay loop muted playsinline webkit-playsinline src="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/jCRFsUO6MJxcLLwu5RFnl.mp4" poster="https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/8SDzh-sWnyxcvr3vu79S3.png"></video> <p style="text-align: center;">All quotes/conversations above from Dystopian Survival Guide 1 model.</br></br></p> <h1 style="text-align: center;">Utilize <a href="https://highermindai.substack.com/p/unlocking-the-power-of-ai-conversation" style="color: blue;" target="_blank">Substack</a> to maximize the potential of this model.</h1> <p style="text-align: center;">To maximize the potential of our models, we provide professional guidance and strategies on utilizing HigherMind's domain-specific AI capabilities. Additionally, we offer pre-release/experimental models and direct messaging access to our team. Our subscription is set at the lowest tier that Substack allows, at $5 per month or $30 per year.</br></br>Join the HigherMind community to unlock the full potential of these models and gain real-world benefits. A subscription is highly recommended if you plan to use this model for practical applications. By subscribing, you not only support our efforts but also enhance your own experience. If you're just exploring there is no need to subscribe. 
Enjoy!</br></p>
<h2 style="text-align: center;"><a href="https://highermindai.substack.com/p/unlocking-the-power-of-ai-conversation" style="color: blue;" target="_blank">Unlocking the Power of AI Conversation</a> will take you from user to expert very quickly.</br></br></h2>
<h2 style="text-align: center;">TO DOWNLOAD/USE THIS MODEL FOR FREE</br></br>1. Download <a href="https://lmstudio.ai/" style="color: blue;" target="_blank">LM Studio</a></br>2. Copy the model name:</br>HigherMind/Dystopian-Survival-Guide-1-Q8_0-GGUF</br></br>3. Paste it into the LM Studio search bar and click GO</br>4. Download the model</br>5. System Prompt:</br>The following is a conversation with an AI. She is very honest.</br></br>Have your first conversation with Dystopian Survival Guide 1</br></br><a href="https://youtu.be/IJrQUaS2XPc" style="color: blue;" target="_blank">Watch this 1-min video</a> on how to do all this.</h2>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66b8c502198956675d219fed/-nuPldDZBjt8XKBsW6-32.png)

<p style="text-align: center;">|| <a href="https://www.highermind.ai" style="color: blue;" target="_blank">HigherMind.ai</a> ||</br></p>
<p style="text-align: center;">With LM Studio, your HigherMind models are stored locally, remain private, and can be used offline. HigherMind models are experts for a specific domain, outperforming OpenAI frontier models by 200-300%; they are not generalist models. You may need to download multiple HigherMind models.</br>Base model: mistralai/Mistral-7B-Instruct-v0.3, 32k context.</br></br>All HigherMind models are trained using off-grid solar power. Not because we feel guilted into it but because it's amazing you can.</p>
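For readers who would rather script against the model than use the LM Studio chat window, a hedged sketch follows. It assumes LM Studio's local server is running with this model loaded (LM Studio exposes an OpenAI-compatible endpoint, by default at `http://localhost:1234/v1`); the model identifier below is illustrative and should match whatever name your LM Studio instance reports:

```python
from openai import OpenAI  # pip install openai

# LM Studio's local server speaks the OpenAI chat API; no real key is needed.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="HigherMind/Dystopian-Survival-Guide-1-Q8_0-GGUF",  # illustrative identifier
    messages=[
        {"role": "system", "content": "The following is a conversation with an AI. She is very honest."},
        {"role": "user", "content": "How do I start building a trusted network?"},
    ],
)
print(response.choices[0].message.content)
```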
finnstrom3693/hermes-2-pro-mistral-418m-pruned
finnstrom3693
2024-09-24T16:25:39Z
122
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T15:57:11Z
---
library_name: transformers
tags: []
---

# Model Information

A pruned model derived from the Hermes 2 Pro Mistral V1 base model.
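The card ships no usage snippet. Since the repository is tagged for `transformers` text generation, a minimal, hypothetical loading sketch (untested against this checkpoint) might look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "finnstrom3693/hermes-2-pro-mistral-418m-pruned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```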
aryaumesh/english-to-telugu
aryaumesh
2024-09-24T16:20:29Z
979
1
null
[ "safetensors", "mbart", "license:cc0-1.0", "region:us" ]
null
2024-09-24T04:53:34Z
---
license: cc0-1.0
---

A model that translates English sentences to Telugu. To use this model, run the following code:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_checkpoint = "aryaumesh/english-to-telugu"
tokenizer = MBart50TokenizerFast.from_pretrained(model_checkpoint)
model = MBartForConditionalGeneration.from_pretrained(model_checkpoint)

text = "This is a test case."  # Sentence to translate
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # ఈ ఒక పరీక్ష కేసు ఉంది.
```
aryaumesh/english-to-tamil
aryaumesh
2024-09-24T16:19:30Z
314
1
null
[ "safetensors", "m2m_100", "license:cc0-1.0", "region:us" ]
null
2024-09-15T15:24:20Z
---
license: cc0-1.0
---

A model that translates English sentences to Tamil. To use this model, run the following code:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_checkpoint = "aryaumesh/english-to-tamil"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)

def language_translator(text):
    tokenized = tokenizer([text], return_tensors='pt')
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "This is a test case."  # Sentence to translate
output = language_translator(text_to_translate)
print(output)  # இது ஒரு சோதனை வழக்கு.
```
SongTonyLi/OpenELM-3B-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge
SongTonyLi
2024-09-24T16:13:52Z
6
0
transformers
[ "transformers", "safetensors", "openelm", "text-generation", "trl", "sft", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
2024-09-24T16:08:59Z
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
deepset/electra-base-squad2
deepset
2024-09-24T16:04:39Z
458
17
transformers
[ "transformers", "pytorch", "safetensors", "electra", "question-answering", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- language: en license: cc-by-4.0 datasets: - squad_v2 model-index: - name: deepset/electra-base-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 77.6074 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzE5NTRmMmUwYTk1MTI0NjM0ZmQwNDFmM2Y4Mjk4ZWYxOGVmOWI3ZGFiNWM4OTUxZDQ2ZjdmNmU3OTk5ZjRjYyIsInZlcnNpb24iOjF9.0VZRewdiovE4z3K5box5R0oTT7etpmd0BX44FJBLRFfot-uJ915b-bceSv3luJQ7ENPjaYSa7o7jcHlDzn3oAw - type: f1 value: 81.7181 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2VlMzM0Y2UzYjhhNTJhMTFiYWZmMDNjNjRiZDgwYzc5NWE3N2M4ZGFlYWQ0ZjVkZTE2MDU0YmMzMDc1MTY5MCIsInZlcnNpb24iOjF9.jRV58UxOM7CJJSsmxJuZvlt00jMGA1thp4aqtcFi1C8qViQ1kW7NYz8rg1gNTDZNez2UwPS1NgN_HnnwBHPbCQ - task: type: question-answering name: Question Answering dataset: name: squad type: squad config: plain_text split: validation metrics: - type: exact_match value: 80.407 name: Exact Match - type: f1 value: 88.942 name: F1 - task: type: question-answering name: Question Answering dataset: name: adversarial_qa type: adversarial_qa config: adversarialQA split: validation metrics: - type: exact_match value: 23.533 name: Exact Match - type: f1 value: 36.521 name: F1 - task: type: question-answering name: Question Answering dataset: name: squad_adversarial type: squad_adversarial config: AddOneSent split: validation metrics: - type: exact_match value: 73.867 name: Exact Match - type: f1 value: 81.381 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts amazon type: squadshifts config: amazon split: test metrics: - type: exact_match value: 64.512 name: Exact Match - type: f1 value: 80.166 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts new_wiki type: squadshifts config: new_wiki split: test metrics: - type: exact_match value: 76.568 name: Exact Match - type: f1 value: 87.706 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts nyt type: squadshifts config: nyt split: test metrics: - type: exact_match value: 77.884 name: Exact Match - type: f1 value: 87.858 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts reddit type: squadshifts config: reddit split: test metrics: - type: exact_match value: 64.399 name: Exact Match - type: f1 value: 78.096 name: F1 --- # electra-base for Extractive QA ## Overview **Language model:** electra-base **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline) **Infrastructure**: 1x Tesla v100 ## Hyperparameters ``` seed=42 batch_size = 32 n_epochs = 5 base_LM_model = "google/electra-base-discriminator" max_seq_len = 384 learning_rate = 1e-4 lr_schedule = LinearWarmup warmup_proportion = 0.1 doc_stride=128 max_query_length=64 ``` ## Performance Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). 
```
"exact": 77.30144024256717,
"f1": 81.35438272008543,
"total": 11873,
"HasAns_exact": 74.34210526315789,
"HasAns_f1": 82.45961302894314,
"HasAns_total": 5928,
"NoAns_exact": 80.25231286795626,
"NoAns_f1": 80.25231286795626,
"NoAns_total": 5945
```

## Usage

### In Haystack

Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents. To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):

```python
# After running pip install haystack-ai "transformers[torch,sentencepiece]"
from haystack import Document
from haystack.components.readers import ExtractiveReader

docs = [
    Document(content="Python is a popular programming language"),
    Document(content="python ist eine beliebte Programmiersprache"),
]

# This card describes electra-base-squad2, so load that checkpoint here.
reader = ExtractiveReader(model="deepset/electra-base-squad2")
reader.warm_up()

question = "What is a popular programming language?"
result = reader.run(query=question, documents=docs)
# {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=..., data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
```

For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).

### In Transformers

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/electra-base-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Authors

Vaishali Pal: `vaishali.pal [at] deepset.ai`
Branden Chan: `branden.chan [at] deepset.ai`
Timo Möller: `timo.moeller [at] deepset.ai`
Malte Pietsch: `malte.pietsch [at] deepset.ai`
Tanay Soni: `tanay.soni [at] deepset.ai`

## About us

<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
  <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
    <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
  </div>
  <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
    <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
  </div>
</div>

[deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1) - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
deepset/deberta-v3-large-squad2
deepset
2024-09-24T16:02:07Z
5,346
51
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "question-answering", "deberta", "deberta-v3", "deberta-v3-large", "en", "dataset:squad_v2", "license:cc-by-4.0", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2022-07-25T09:28:55Z
--- language: en license: cc-by-4.0 tags: - deberta - deberta-v3 - deberta-v3-large datasets: - squad_v2 model-index: - name: deepset/deberta-v3-large-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 88.0876 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmE0MWEwNjBkNTA1MmU0ZDkyYTA1OGEwNzY3NGE4NWU4NGI0NTQzNjRlNjY1NGRmNDU2MjA0NjU1N2JlZmNhYiIsInZlcnNpb24iOjF9.PnBF_vD0HujNBSShGJzsJnjmiBP_qT8xb2E7ORmpKfNspKXEuN_pBk9iV0IHRzdqOSyllcxlCv93XMPblNjWDw - type: f1 value: 91.1623 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDBkNDUzZmNkNDQwOGRkMmVlZjkxZWVlMzk3NzFmMGIxMTFmMjZlZDcyOWFiMjljNjM5MThlZDM4OWRmNzMwOCIsInZlcnNpb24iOjF9.bacyetziNI2DxO67GWpTyeRPXqF1POkyv00wEHXlyZu71pZngsNpZyrnuj2aJlCqQwHGnF_lT2ysaXKHprQRBg - task: type: question-answering name: Question Answering dataset: name: squad type: squad config: plain_text split: validation metrics: - type: exact_match value: 89.2366 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQ1Yjk3YTdiYTY1NmYxMTI1ZGZlMjRkNTlhZTkyNjRkNjgxYWJiNDk2NzE3NjAyYmY3YmRjNjg4YmEyNDkyYyIsInZlcnNpb24iOjF9.SEWyqX_FPQJOJt2KjOCNgQ2giyVeLj5bmLI5LT_Pfo33tbWPWD09TySYdsthaVTjUGT5DvDzQLASSwBH05FyBw - type: f1 value: 95.0569 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2QyODQ1NWVlYjQxMjA0YTgyNmQ2NmIxOWY3MDRmZjE3ZWI5Yjc4ZDE4NzA2YjE2YTE1YTBlNzNiYmNmNzI3NCIsInZlcnNpb24iOjF9.NcXEc9xoggV76w1bQKxuJDYbOTxFzdny2k-85_b6AIMtfpYV3rGR1Z5YF6tVY2jyp7mgm5Jd5YSgGI3NvNE-CQ - task: type: question-answering name: Question Answering dataset: name: adversarial_qa type: adversarial_qa config: adversarialQA split: validation metrics: - type: exact_match value: 42.100 name: Exact Match - type: f1 value: 56.587 name: F1 - task: type: question-answering name: Question Answering dataset: name: squad_adversarial type: squad_adversarial config: AddOneSent split: validation metrics: - type: exact_match value: 83.548 name: Exact Match - type: f1 value: 89.385 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts amazon type: squadshifts config: amazon split: test metrics: - type: exact_match value: 72.979 name: Exact Match - type: f1 value: 87.254 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts new_wiki type: squadshifts config: new_wiki split: test metrics: - type: exact_match value: 83.938 name: Exact Match - type: f1 value: 92.695 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts nyt type: squadshifts config: nyt split: test metrics: - type: exact_match value: 85.534 name: Exact Match - type: f1 value: 93.153 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts reddit type: squadshifts config: reddit split: test metrics: - type: exact_match value: 73.284 name: Exact Match - type: f1 value: 85.307 name: F1 --- # deberta-v3-large for Extractive QA This is the [deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Extractive Question Answering. 
## Overview **Language model:** deberta-v3-large **Language:** English **Downstream-task:** Extractive QA **Training data:** SQuAD 2.0 **Eval data:** SQuAD 2.0 **Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline) **Infrastructure**: 1x NVIDIA A10G ## Hyperparameters ``` batch_size = 2 grad_acc_steps = 32 n_epochs = 6 base_LM_model = "microsoft/deberta-v3-large" max_seq_len = 512 learning_rate = 7e-6 lr_schedule = LinearWarmup warmup_proportion = 0.2 doc_stride=128 max_query_length=64 ``` ## Usage ### In Haystack Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents. To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/): ```python # After running pip install haystack-ai "transformers[torch,sentencepiece]" from haystack import Document from haystack.components.readers import ExtractiveReader docs = [ Document(content="Python is a popular programming language"), Document(content="python ist eine beliebte Programmiersprache"), ] reader = ExtractiveReader(model="deepset/deberta-v3-large-squad2") reader.warm_up() question = "What is a popular programming language?" result = reader.run(query=question, documents=docs) # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]} ``` For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline). ### In Transformers ```python from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "deepset/deberta-v3-large-squad2" # a) Get predictions nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) # b) Load model & tokenizer model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` ## Performance Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/). ``` "exact": 87.6105449338836, "f1": 90.75307008866517, "total": 11873, "HasAns_exact": 84.37921727395411, "HasAns_f1": 90.6732795483674, "HasAns_total": 5928, "NoAns_exact": 90.83263246425568, "NoAns_f1": 90.83263246425568, "NoAns_total": 5945 ``` ## About us <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3"> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/> </div> <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center"> <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/> </div> </div> [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/). 
Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1) - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf
RichardErkhov
2024-09-24T16:00:21Z
8
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T08:55:55Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Meta-Llama-3-8B-Instruct-sft-webshop-iter1 - GGUF - Model creator: https://huggingface.co/decision-oaif/ - Original model: https://huggingface.co/decision-oaif/Meta-Llama-3-8B-Instruct-sft-webshop-iter1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q2_K.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q2_K.gguf) | Q2_K | 2.96GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ3_S.gguf) | IQ3_S | 3.43GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ3_M.gguf) | IQ3_M | 3.52GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q3_K.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q3_K.gguf) | Q3_K | 3.74GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_0.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_0.gguf) | Q4_0 | 4.34GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | 
[Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_K.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_K.gguf) | Q4_K | 4.58GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_1.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_1.gguf) | Q4_1 | 4.78GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_0.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_0.gguf) | Q5_0 | 5.21GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_K.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_K.gguf) | Q5_K | 5.34GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_1.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q5_1.gguf) | Q5_1 | 5.65GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q6_K.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q6_K.gguf) | Q6_K | 6.14GB | | [Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q8_0.gguf](https://huggingface.co/RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf/blob/main/Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. 
--> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
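Since the original model description is an empty template, a short usage sketch may help. The following is a minimal example, not an official part of this repo: it assumes `huggingface_hub` and `llama-cpp-python` are installed, downloads one of the quantized files listed above (the Q4_K_M variant is an arbitrary choice), and uses illustrative context-size and sampling parameters.

```python
# Sketch: download one GGUF quant from the table above and run it locally.
# Assumptions: pip install huggingface_hub llama-cpp-python; all parameters illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/decision-oaif_-_Meta-Llama-3-8B-Instruct-sft-webshop-iter1-gguf",
    filename="Meta-Llama-3-8B-Instruct-sft-webshop-iter1.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context window is a tunable assumption
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What does a webshop agent do?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (e.g. Q2_K) trade accuracy for memory; larger ones (Q6_K, Q8_0) stay closer to the original fp16 weights.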
NaeemCSECUET18/bangla-bert-base-MLTC-BBAU
NaeemCSECUET18
2024-09-24T15:58:08Z
93
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:sagorsarker/bangla-bert-base", "base_model:finetune:sagorsarker/bangla-bert-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-06-27T02:08:36Z
---
license: mit
base_model: sagorsarker/bangla-bert-base
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bangla-bert-base-MLTC-BBAU
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bangla-bert-base-MLTC-BBAU

This model is a fine-tuned version of [sagorsarker/bangla-bert-base](https://huggingface.co/sagorsarker/bangla-bert-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3665
- F1: 0.8465
- F1 Weighted: 0.8455
- Roc Auc: 0.8412
- Accuracy: 0.5424
- Hamming Loss: 0.1587
- Jaccard Score: 0.7338
- Zero One Loss: 0.4576

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | F1 Weighted | Roc Auc | Accuracy | Hamming Loss | Jaccard Score | Zero One Loss |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:-------:|:--------:|:------------:|:-------------:|:-------------:|
| 0.4152        | 1.0   | 73   | 0.4083          | 0.8201 | 0.8155      | 0.8181  | 0.4987   | 0.1819       | 0.6950        | 0.5013        |
| 0.3506        | 2.0   | 146  | 0.3671          | 0.8504 | 0.8509      | 0.8496  | 0.5681   | 0.1504       | 0.7397        | 0.4319        |
| 0.2992        | 3.0   | 219  | 0.3665          | 0.8465 | 0.8455      | 0.8412  | 0.5424   | 0.1587       | 0.7338        | 0.4576        |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
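The card above does not include a usage snippet. Since the metrics (F1, Hamming loss, Jaccard score) indicate a multi-label text classification setup, a minimal inference sketch could look like the following; it assumes the checkpoint exposes a multi-label classification head with `id2label` populated in its config, and the 0.5 threshold and the input sentence are placeholders:

```python
# Minimal multi-label inference sketch; threshold and input text are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "NaeemCSECUET18/bangla-bert-base-MLTC-BBAU"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "এখানে একটি বাংলা বাক্য লিখুন"  # placeholder Bangla input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: apply a sigmoid per label and keep every label above the threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```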
deepset/deberta-v3-base-squad2
deepset
2024-09-24T15:58:07Z
3,791
20
transformers
[ "transformers", "pytorch", "safetensors", "deberta-v2", "question-answering", "deberta", "deberta-v3", "en", "dataset:squad_v2", "base_model:microsoft/deberta-v3-base", "base_model:finetune:microsoft/deberta-v3-base", "license:cc-by-4.0", "model-index", "endpoints_compatible", "region:us" ]
question-answering
2022-07-22T12:54:36Z
--- language: en license: cc-by-4.0 tags: - deberta - deberta-v3 datasets: - squad_v2 base_model: microsoft/deberta-v3-base model-index: - name: deepset/deberta-v3-base-squad2 results: - task: type: question-answering name: Question Answering dataset: name: squad_v2 type: squad_v2 config: squad_v2 split: validation metrics: - type: exact_match value: 83.8248 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2IyZTEyYzNlOTAwZmFlNWRiZTdiNzQzMTUyM2FmZTQ3ZWQwNWZmMzc2ZDVhYWYyMzkxOTUyMGNlMWY0M2E5MiIsInZlcnNpb24iOjF9.y8KvfefMLI977BYun0X1rAq5qudmezW_UJe9mh6sYBoiWaBosDO5TRnEGR1BHzdxmv2EgPK_PSomtZvb043jBQ - type: f1 value: 87.41 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWVhNjAwM2Q5N2Y3MGU4ZWY3N2Y0MmNjYWYwYmQzNTdiYWExODhkYmQ1YjIwM2I1ODEzNWIxZDI1ZWQ1YWRjNSIsInZlcnNpb24iOjF9.Jk0v1ZheLRFz6k9iNAgCMMZtPYj5eVwUCku4E76wRYc-jHPmiUuxvNiNkn6NW-jkBD8bJGMqDSjJyVpVMn9pBA - task: type: question-answering name: Question Answering dataset: name: squad type: squad config: plain_text split: validation metrics: - type: exact_match value: 84.9678 name: Exact Match verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWUxYTg4MzU3YTdmMDRmMGM0NjFjMTcwNGM3YzljM2RkMTc1ZGNhMDQwMTgwNGI0ZDE4ZGMxZTE3YjY5YzQ0ZiIsInZlcnNpb24iOjF9.KKaJ1UtikNe2g6T8XhLoWNtL9X4dHHyl_O4VZ5LreBT9nXneGc21lI1AW3n8KXTFGemzRpRMvmCDyKVDHucdDQ - type: f1 value: 92.2777 name: F1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDU0ZTQwMzg4ZDY1ZWYxOGIxMzY2ODljZTBkMTNlYjA0ODBjNjcxNTg3ZDliYWU1YTdkYTM2NTIxOTg1MGM4OCIsInZlcnNpb24iOjF9.8VHg1BXx6gLw_K7MUK2QSE80Y9guiVR8n8K8nX4laGsLibxv5u_yDv9F3ahbUa1eZG_bbidl93TY2qFUiYHtAQ - task: type: question-answering name: Question Answering dataset: name: adversarial_qa type: adversarial_qa config: adversarialQA split: validation metrics: - type: exact_match value: 30.733 name: Exact Match - type: f1 value: 44.099 name: F1 - task: type: question-answering name: Question Answering dataset: name: squad_adversarial type: squad_adversarial config: AddOneSent split: validation metrics: - type: exact_match value: 79.295 name: Exact Match - type: f1 value: 86.609 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts amazon type: squadshifts config: amazon split: test metrics: - type: exact_match value: 68.680 name: Exact Match - type: f1 value: 83.832 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts new_wiki type: squadshifts config: new_wiki split: test metrics: - type: exact_match value: 80.171 name: Exact Match - type: f1 value: 90.452 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts nyt type: squadshifts config: nyt split: test metrics: - type: exact_match value: 81.570 name: Exact Match - type: f1 value: 90.644 name: F1 - task: type: question-answering name: Question Answering dataset: name: squadshifts reddit type: squadshifts config: reddit split: test metrics: - type: exact_match value: 66.990 name: Exact Match - type: f1 value: 80.231 name: F1 --- # deberta-v3-base for Extractive QA This is the [deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering. 
## Overview
**Language model:** deberta-v3-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)
**Infrastructure**: 1x NVIDIA A10G

## Hyperparameters

```
batch_size = 12
n_epochs = 4
base_LM_model = "deberta-v3-base"
max_seq_len = 512
learning_rate = 2e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride = 128
max_query_length = 64
```

## Usage

### In Haystack
Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents.
To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
```python
# After running pip install haystack-ai "transformers[torch,sentencepiece]"
from haystack import Document
from haystack.components.readers import ExtractiveReader

docs = [
    Document(content="Python is a popular programming language"),
    Document(content="python ist eine beliebte Programmiersprache"),
]

reader = ExtractiveReader(model="deepset/deberta-v3-base-squad2")
reader.warm_up()

question = "What is a popular programming language?"
result = reader.run(query=question, documents=docs)
# {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=..., data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6), ...)]}
```
For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).

### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/deberta-v3-base-squad2"

# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)

# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

## Authors
**Sebastian Lee:** sebastian.lee [at] deepset.ai
**Timo Möller:** timo.moeller [at] deepset.ai
**Malte Pietsch:** malte.pietsch [at] deepset.ai

## About us
<div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
    <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
</div>
<div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
    <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
</div>
</div>

[deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
Some of our other work: - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2) - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1) - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio) ## Get in touch and join the Haystack community <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>. We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p> [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai) By the way: [we're hiring!](http://www.deepset.ai/jobs)
kingabzpro/Phi-3.5-mini-instruct-Ecommerce-Text-Classification
kingabzpro
2024-09-24T15:54:24Z
95
1
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "Ecommerce", "Phi-3.5", "Fine-tuned", "conversational", "custom_code", "en", "dataset:saurabhshahane/ecommerce-text-classification", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T15:37:57Z
---
datasets:
- saurabhshahane/ecommerce-text-classification
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
- f1
pipeline_tag: text-generation
tags:
- Ecommerce
- Phi-3.5
- Fine-tuned
---

## Phi-3.5-mini-instruct-Ecommerce-Text-Classification

This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the [saurabhshahane/ecommerce-text-classification](https://www.kaggle.com/datasets/saurabhshahane/ecommerce-text-classification) dataset.

## Tutorial

Customize the Phi-3.5-mini-instruct model to predict e-commerce categories from product text.

## Use with Transformers

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

model_id = "kingabzpro/Phi-3.5-mini-instruct-Ecommerce-Text-Classification"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    return_dict=True,
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

text = "Inalsa Dazzle Glass Top, 3 Burner Gas Stove with Rust Proof Powder Coated Body, Black Toughened Glass Top, 2 Medium and 1 Small High Efficiency Brass Burners, Aluminum Mixing Tubes, Powder Coated Body, Inbuilt Stainless Steel Drip Trays, 360 degree Swivel Nozzle,Bigger Legs to Facilitate Cleaning Under Cooktop"

prompt = f"""Classify the E-commerce text into Electronics, Household, Books and Clothing.
text: {text}
label: """.strip()

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipe(prompt, max_new_tokens=4, do_sample=True, temperature=0.1)
print(outputs[0]["generated_text"].split("label: ")[-1].strip())
# Household
```

## Results

```bash
Accuracy: 0.860
Accuracy for label Electronics: 0.825
Accuracy for label Household: 0.926
Accuracy for label Books: 0.683
Accuracy for label Clothing: 0.947
```

**Classification Report:**

```bash
              precision    recall  f1-score   support

 Electronics       0.97      0.82      0.89        40
   Household       0.88      0.93      0.90        81
       Books       0.90      0.68      0.78        41
    Clothing       0.88      0.95      0.91        38

   micro avg       0.90      0.86      0.88       200
   macro avg       0.91      0.85      0.87       200
weighted avg       0.90      0.86      0.88       200
```

**Confusion Matrix:**

```bash
[[33  6  1  0]
 [ 1 75  2  3]
 [ 0  3 28  2]
 [ 0  1  0 36]]
```
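As a side note, a classification report and confusion matrix in the format shown above can be produced with scikit-learn. This is a sketch under the assumption that `y_true` and `y_pred` are lists of gold and predicted label strings collected from the pipeline (the two-example lists here are placeholders):

```python
# Sketch: compute the metrics shown above with scikit-learn.
# Assumption: y_true/y_pred hold gold and predicted label strings.
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

labels = ["Electronics", "Household", "Books", "Clothing"]
y_true = ["Household", "Books"]      # placeholder gold labels
y_pred = ["Household", "Household"]  # placeholder predictions

print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, labels=labels, zero_division=0))
print(confusion_matrix(y_true, y_pred, labels=labels))
```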
rgtlai/ai-policy-ft
rgtlai
2024-09-24T15:52:58Z
48
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:200", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-m", "base_model:finetune:Snowflake/snowflake-arctic-embed-m", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-09-24T15:52:03Z
--- base_model: Snowflake/snowflake-arctic-embed-m library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 - dot_accuracy@1 - dot_accuracy@3 - dot_accuracy@5 - dot_accuracy@10 - dot_precision@1 - dot_precision@3 - dot_precision@5 - dot_precision@10 - dot_recall@1 - dot_recall@3 - dot_recall@5 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:200 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: What measures should be taken to ensure that automated systems are safe and effective before deployment? sentences: - " AI BILL OF RIGHTS\nFFECTIVE SYSTEMS\nineffective systems. Automated systems\ \ should be \ncommunities, stakeholders, and domain experts to identify \nSystems\ \ should undergo pre-deployment testing, risk \nthat demonstrate they are safe\ \ and effective based on \nincluding those beyond the intended use, and adherence\ \ to \nprotective measures should include the possibility of not \nAutomated systems\ \ should not be designed with an intent \nreasonably foreseeable possibility of\ \ endangering your safety or the safety of your community. They should \nstemming\ \ from unintended, yet foreseeable, uses or \n \n \n \n \n \n \n \nSECTION TITLE\n\ BLUEPRINT FOR AN\nSAFE AND E \nYou should be protected from unsafe or \ndeveloped\ \ with consultation from diverse \nconcerns, risks, and potential impacts of the\ \ system. \nidentification and mitigation, and ongoing monitoring \ntheir intended\ \ use, mitigation of unsafe outcomes \ndomain-specific standards. Outcomes of\ \ these \ndeploying the system or removing a system from use. \nor \nbe designed\ \ to proactively protect you from harms \nimpacts of automated systems. You should\ \ be protected from inappropriate or irrelevant data use in the \ndesign, development,\ \ and deployment of automated systems, and from the compounded harm of its reuse.\ \ \nIndependent evaluation and reporting that confirms that the system is safe\ \ and effective, including reporting of \nsteps taken to mitigate potential harms,\ \ should be performed and the results made public whenever possible. \nALGORITHMIC\ \ DISCRIMINATION PROTECTIONS\nYou should not face discrimination by algorithms\ \ and systems should be used and designed in \nan equitable way. Algorithmic discrimination\ \ occurs when automated systems contribute to unjustified \ndifferent treatment\ \ or impacts disfavoring people based on their race, color, ethnicity, sex (including\ \ \npregnancy, childbirth, and related medical conditions, gender identity, intersex\ \ status, and sexual \norientation), religion, age, national origin, disability,\ \ veteran status, genetic information, or any other \nclassification protected\ \ by law. Depending on the specific circumstances, such algorithmic discrimination\ \ \nmay violate legal protections. Designers, developers, and deployers of automated\ \ systems should take \nproactive \nand \ncontinuous \nmeasures \nto \nprotect\ \ \nindividuals \nand \ncommunities \nfrom algorithmic \ndiscrimination and to\ \ use and design systems in an equitable way. 
This protection should include proactive\ \ \nequity assessments as part of the system design, use of representative data\ \ and protection against proxies \nfor demographic features, ensuring accessibility\ \ for people with disabilities in design and development, \npre-deployment and\ \ ongoing disparity testing and mitigation, and clear organizational oversight.\ \ Independent \nevaluation and plain language reporting in the form of an algorithmic\ \ impact assessment, including \ndisparity testing results and mitigation information,\ \ should be performed and made public whenever \npossible to confirm these protections.\ \ \n5\n" - "You should be protected from abusive data practices via built-in \nprotections\ \ and you should have agency over how data about \nyou is used. You should be\ \ protected from violations of privacy through \ndesign choices that ensure such\ \ protections are included by default, including \nensuring that data collection\ \ conforms to reasonable expectations and that \nonly data strictly necessary\ \ for the specific context is collected. Designers, de­\nvelopers, and deployers\ \ of automated systems should seek your permission \nand respect your decisions\ \ regarding collection, use, access, transfer, and de­\nletion of your data in\ \ appropriate ways and to the greatest extent possible; \nwhere not possible,\ \ alternative privacy by design safeguards should be used. \nSystems should not\ \ employ user experience and design decisions that obfus­\ncate user choice or\ \ burden users with defaults that are privacy invasive. Con­\nsent should only\ \ be used to justify collection of data in cases where it can be \nappropriately\ \ and meaningfully given. Any consent requests should be brief, \nbe understandable\ \ in plain language, and give you agency over data collection \nand the specific\ \ context of use; current hard-to-understand no­\ntice-and-choice practices for\ \ broad uses of data should be changed. Enhanced \nprotections and restrictions\ \ for data and inferences related to sensitive do­\nmains, including health, work,\ \ education, criminal justice, and finance, and \nfor data pertaining to youth\ \ should put you first. In sensitive domains, your \ndata and related inferences\ \ should only be used for necessary functions, and \nyou should be protected by\ \ ethical review and use prohibitions. You and your \ncommunities should be free\ \ from unchecked surveillance; surveillance tech­\nnologies should be subject\ \ to heightened oversight that includes at least \npre-deployment assessment of\ \ their potential harms and scope limits to pro­\ntect privacy and civil liberties.\ \ Continuous surveillance and monitoring \nshould not be used in education, work,\ \ housing, or in other contexts where the \nuse of such surveillance technologies\ \ is likely to limit rights, opportunities, or \naccess. Whenever possible, you\ \ should have access to reporting that confirms \nyour data decisions have been\ \ respected and provides an assessment of the \npotential impact of surveillance\ \ technologies on your rights, opportunities, or \naccess. 
\nDATA PRIVACY\n30\n" - "APPENDIX\nLisa Feldman Barrett \nMadeline Owens \nMarsha Tudor \nMicrosoft Corporation\ \ \nMITRE Corporation \nNational Association for the \nAdvancement of Colored\ \ People \nLegal Defense and Educational \nFund \nNational Association of Criminal\ \ \nDefense Lawyers \nNational Center for Missing & \nExploited Children \nNational\ \ Fair Housing Alliance \nNational Immigration Law Center \nNEC Corporation of\ \ America \nNew America’s Open Technology \nInstitute \nNew York Civil Liberties\ \ Union \nNo Name Provided \nNotre Dame Technology Ethics \nCenter \nOffice of\ \ the Ohio Public Defender \nOnfido \nOosto \nOrissa Rose \nPalantir \nPangiam\ \ \nParity Technologies \nPatrick A. Stewart, Jeffrey K. \nMullins, and Thomas\ \ J. Greitens \nPel Abbott \nPhiladelphia Unemployment \nProject \nProject On\ \ Government Oversight \nRecording Industry Association of \nAmerica \nRobert\ \ Wilkens \nRon Hedges \nScience, Technology, and Public \nPolicy Program at University\ \ of \nMichigan Ann Arbor \nSecurity Industry Association \nSheila Dean \nSoftware\ \ & Information Industry \nAssociation \nStephanie Dinkins and the Future \nHistories\ \ Studio at Stony Brook \nUniversity \nTechNet \nThe Alliance for Media Arts and\ \ \nCulture, MIT Open Documentary \nLab and Co-Creation Studio, and \nImmerse\ \ \nThe International Brotherhood of \nTeamsters \nThe Leadership Conference on\ \ \nCivil and Human Rights \nThorn \nU.S. Chamber of Commerce’s \nTechnology Engagement\ \ Center \nUber Technologies \nUniversity of Pittsburgh \nUndergraduate Student\ \ \nCollaborative \nUpturn \nUS Technology Policy Committee \nof the Association\ \ of Computing \nMachinery \nVirginia Puccio \nVisar Berisha and Julie Liss \n\ XR Association \nXR Safety Initiative \n• As an additional effort to reach out\ \ to stakeholders regarding the RFI, OSTP conducted two listening sessions\nfor\ \ members of the public. The listening sessions together drew upwards of 300 participants.\ \ The Science and\nTechnology Policy Institute produced a synopsis of both the\ \ RFI submissions and the feedback at the listening\nsessions.115\n61\n" - source_sentence: How does the document address algorithmic discrimination protections? sentences: - " \n \n \n \n \n \n \n \n \n \n \n \nSAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD\ \ BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are\ \ meant to serve as a blueprint for the development of additional \ntechnical\ \ standards and practices that are tailored for particular sectors and contexts.\ \ \nOngoing monitoring. Automated systems should have ongoing monitoring procedures,\ \ including recalibra­\ntion procedures, in place to ensure that their performance\ \ does not fall below an acceptable level over time, \nbased on changing real-world\ \ conditions or deployment contexts, post-deployment modification, or unexpect­\n\ ed conditions. This ongoing monitoring should include continuous evaluation of\ \ performance metrics and \nharm assessments, updates of any systems, and retraining\ \ of any machine learning models as necessary, as well \nas ensuring that fallback\ \ mechanisms are in place to allow reversion to a previously working system. Monitor­\n\ ing should take into account the performance of both technical system components\ \ (the algorithm as well as \nany hardware components, data inputs, etc.) and\ \ human operators. 
It should include mechanisms for testing \nthe actual accuracy\ \ of any predictions or recommendations generated by a system, not just a human\ \ operator’s \ndetermination of their accuracy. Ongoing monitoring procedures\ \ should include manual, human-led monitor­\ning as a check in the event there\ \ are shortcomings in automated monitoring systems. These monitoring proce­\n\ dures should be in place for the lifespan of the deployed automated system. \n\ Clear organizational oversight. Entities responsible for the development or use\ \ of automated systems \nshould lay out clear governance structures and procedures.\ \ This includes clearly-stated governance proce­\ndures before deploying the\ \ system, as well as responsibility of specific individuals or entities to oversee\ \ ongoing \nassessment and mitigation. Organizational stakeholders including those\ \ with oversight of the business process \nor operation being automated, as well\ \ as other organizational divisions that may be affected due to the use of \n\ the system, should be involved in establishing governance procedures. Responsibility\ \ should rest high enough \nin the organization that decisions about resources,\ \ mitigation, incident response, and potential rollback can be \nmade promptly,\ \ with sufficient weight given to risk mitigation objectives against competing\ \ concerns. Those \nholding this responsibility should be made aware of any use\ \ cases with the potential for meaningful impact on \npeople’s rights, opportunities,\ \ or access as determined based on risk identification procedures. In some cases,\ \ \nit may be appropriate for an independent ethics review to be conducted before\ \ deployment. \nAvoid inappropriate, low-quality, or irrelevant data use and the\ \ compounded harm of its \nreuse \nRelevant and high-quality data. Data used as\ \ part of any automated system’s creation, evaluation, or \ndeployment should\ \ be relevant, of high quality, and tailored to the task at hand. Relevancy should\ \ be \nestablished based on research-backed demonstration of the causal influence\ \ of the data to the specific use case \nor justified more generally based on\ \ a reasonable expectation of usefulness in the domain and/or for the \nsystem\ \ design or ongoing development. Relevance of data should not be established solely\ \ by appealing to \nits historical connection to the outcome. High quality and\ \ tailored data should be representative of the task at \nhand and errors from\ \ data entry or other sources should be measured and limited. Any data used as\ \ the target \nof a prediction process should receive particular attention to\ \ the quality and validity of the predicted outcome \nor label to ensure the goal\ \ of the automated system is appropriately identified and measured. Additionally,\ \ \njustification should be documented for each data attribute and source to explain\ \ why it is appropriate to use \nthat data to inform the results of the automated\ \ system and why such use will not violate any applicable laws. 
\nIn cases of\ \ high-dimensional and/or derived attributes, such justifications can be provided\ \ as overall \ndescriptions of the attribute generation process and appropriateness.\ \ \n19\n" - "TABLE OF CONTENTS\nFROM PRINCIPLES TO PRACTICE: A TECHNICAL COMPANION TO THE\ \ BLUEPRINT \nFOR AN AI BILL OF RIGHTS \n \nUSING THIS TECHNICAL COMPANION\n \n\ SAFE AND EFFECTIVE SYSTEMS\n \nALGORITHMIC DISCRIMINATION PROTECTIONS\n \nDATA\ \ PRIVACY\n \nNOTICE AND EXPLANATION\n \nHUMAN ALTERNATIVES, CONSIDERATION, AND\ \ FALLBACK\nAPPENDIX\n \nEXAMPLES OF AUTOMATED SYSTEMS\n \nLISTENING TO THE AMERICAN\ \ PEOPLE\nENDNOTES \n12\n14\n15\n23\n30\n40\n46\n53\n53\n55\n63\n13\n" - "APPENDIX\nSystems that impact the safety of communities such as automated traffic\ \ control systems, elec \n-ctrical grid controls, smart city technologies, and\ \ industrial emissions and environmental\nimpact control algorithms; and\nSystems\ \ related to access to benefits or services or assignment of penalties such as\ \ systems that\nsupport decision-makers who adjudicate benefits such as collating\ \ or analyzing information or\nmatching records, systems which similarly assist\ \ in the adjudication of administrative or criminal\npenalties, fraud detection\ \ algorithms, services or benefits access control algorithms, biometric\nsystems\ \ used as access control, and systems which make benefits or services related\ \ decisions on a\nfully or partially autonomous basis (such as a determination\ \ to revoke benefits).\n54\n" - source_sentence: What legislation is referenced in the context that became effective on October 3, 2008, regarding biometric information? sentences: - " \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\ HOW THESE PRINCIPLES CAN MOVE INTO PRACTICE\nReal-life examples of how these principles\ \ can become reality, through laws, policies, and practical \ntechnical and sociotechnical\ \ approaches to protecting rights, opportunities, and access. \nThe federal government\ \ is working to combat discrimination in mortgage lending. The Depart­\nment of\ \ Justice has launched a nationwide initiative to combat redlining, which includes\ \ reviewing how \nlenders who may be avoiding serving communities of color are\ \ conducting targeted marketing and advertising.51 \nThis initiative will draw\ \ upon strong partnerships across federal agencies, including the Consumer Financial\ \ \nProtection Bureau and prudential regulators. The Action Plan to Advance Property\ \ Appraisal and Valuation \nEquity includes a commitment from the agencies that\ \ oversee mortgage lending to include a \nnondiscrimination standard in the proposed\ \ rules for Automated Valuation Models.52\nThe Equal Employment Opportunity Commission\ \ and the Department of Justice have clearly \nlaid out how employers’ use of\ \ AI and other automated systems can result in \ndiscrimination against job applicants\ \ and employees with disabilities.53 The documents explain \nhow employers’ use\ \ of software that relies on algorithmic decision-making may violate existing\ \ requirements \nunder Title I of the Americans with Disabilities Act (“ADA”).\ \ This technical assistance also provides practical \ntips to employers on how\ \ to comply with the ADA, and to job applicants and employees who think that their\ \ \nrights may have been violated. \nDisparity assessments identified harms to\ \ Black patients' healthcare access. 
A widely \nused healthcare algorithm relied\ \ on the cost of each patient’s past medical care to predict future medical needs,\ \ \nrecommending early interventions for the patients deemed most at risk. This\ \ process discriminated \nagainst Black patients, who generally have less access\ \ to medical care and therefore have generated less cost \nthan white patients\ \ with similar illness and need. A landmark study documented this pattern and\ \ proposed \npractical ways that were shown to reduce this bias, such as focusing\ \ specifically on active chronic health \nconditions or avoidable future costs\ \ related to emergency visits and hospitalization.54 \nLarge employers have developed\ \ best practices to scrutinize the data and models used \nfor hiring. An industry\ \ initiative has developed Algorithmic Bias Safeguards for the Workforce, a structured\ \ \nquestionnaire that businesses can use proactively when procuring software\ \ to evaluate workers. It covers \nspecific technical questions such as the training\ \ data used, model training process, biases identified, and \nmitigation steps\ \ employed.55 \nStandards organizations have developed guidelines to incorporate\ \ accessibility criteria \ninto technology design processes. The most prevalent\ \ in the United States is the Access Board’s Section \n508 regulations,56 which\ \ are the technical standards for federal information communication technology\ \ (software, \nhardware, and web). Other standards include those issued by the\ \ International Organization for \nStandardization,57 and the World Wide Web Consortium\ \ Web Content Accessibility Guidelines,58 a globally \nrecognized voluntary consensus\ \ standard for web content and other information and communications \ntechnology.\ \ \nNIST has released Special Publication 1270, Towards a Standard for Identifying\ \ and Managing Bias \nin Artificial Intelligence.59 The special publication: describes\ \ the stakes and challenges of bias in artificial \nintelligence and provides\ \ examples of how and why it can chip away at public trust; identifies three categories\ \ \nof bias in AI – systemic, statistical, and human – and describes how and where\ \ they contribute to harms; and \ndescribes three broad challenges for mitigating\ \ bias – datasets, testing and evaluation, and human factors – and \nintroduces\ \ preliminary guidance for addressing them. Throughout, the special publication\ \ takes a socio-\ntechnical perspective to identifying and managing AI bias. \n\ 29\nAlgorithmic \nDiscrimination \nProtections \n" - " \n \nENDNOTES\n85. Mick Dumke and Frank Main. A look inside the watch list Chicago\ \ police fought to keep secret. The\nChicago Sun Times. May 18, 2017.\nhttps://chicago.suntimes.com/2017/5/18/18386116/a-look-inside-the-watch-list-chicago-police-fought­\n\ to-keep-secret\n86. Jay Stanley. Pitfalls of Artificial Intelligence Decisionmaking\ \ Highlighted In Idaho ACLU Case.\nACLU. Jun. 2, 2017.\nhttps://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking­\n\ highlighted-idaho-aclu-case\n87. Illinois General Assembly. Biometric Information\ \ Privacy Act. Effective Oct. 3, 2008.\nhttps://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57\n\ 88. Partnership on AI. ABOUT ML Reference Document. Accessed May 2, 2022.\nhttps://partnershiponai.org/paper/about-ml-reference-document/1/\n\ 89. 
See, e.g., the model cards framework: Margaret Mitchell, Simone Wu, Andrew\ \ Zaldivar, Parker\nBarnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa\ \ Deborah Raji, and Timnit Gebru.\nModel Cards for Model Reporting. In Proceedings\ \ of the Conference on Fairness, Accountability, and\nTransparency (FAT* '19).\ \ Association for Computing Machinery, New York, NY, USA, 220–229. https://\n\ dl.acm.org/doi/10.1145/3287560.3287596\n90. Sarah Ammermann. Adverse Action Notice\ \ Requirements Under the ECOA and the FCRA. Consumer\nCompliance Outlook. Second\ \ Quarter 2013.\nhttps://consumercomplianceoutlook.org/2013/second-quarter/adverse-action-notice-requirements­\n\ under-ecoa-fcra/\n91. Federal Trade Commission. Using Consumer Reports for Credit\ \ Decisions: What to Know About\nAdverse Action and Risk-Based Pricing Notices.\ \ Accessed May 2, 2022.\nhttps://www.ftc.gov/business-guidance/resources/using-consumer-reports-credit-decisions-what­\n\ know-about-adverse-action-risk-based-pricing-notices#risk\n92. Consumer Financial\ \ Protection Bureau. CFPB Acts to Protect the Public from Black-Box Credit\nModels\ \ Using Complex Algorithms. May 26, 2022.\nhttps://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black­\n\ box-credit-models-using-complex-algorithms/\n93. Anthony Zaller. California Passes\ \ Law Regulating Quotas In Warehouses – What Employers Need to\nKnow About AB\ \ 701. Zaller Law Group California Employment Law Report. Sept. 24, 2021.\nhttps://www.californiaemploymentlawreport.com/2021/09/california-passes-law-regulating-quotas­\n\ in-warehouses-what-employers-need-to-know-about-ab-701/\n94. National Institute\ \ of Standards and Technology. AI Fundamental Research – Explainability.\nAccessed\ \ Jun. 4, 2022.\nhttps://www.nist.gov/artificial-intelligence/ai-fundamental-research-explainability\n\ 95. DARPA. Explainable Artificial Intelligence (XAI). Accessed July 20, 2022.\n\ https://www.darpa.mil/program/explainable-artificial-intelligence\n71\n" - " \nENDNOTES\n12. Expectations about reporting are intended for the entity developing\ \ or using the automated system. The\nresulting reports can be provided to the\ \ public, regulators, auditors, industry standards groups, or others\nengaged\ \ in independent review, and should be made public as much as possible consistent\ \ with law,\nregulation, and policy, and noting that intellectual property or\ \ law enforcement considerations may prevent\npublic release. These reporting\ \ expectations are important for transparency, so the American people can\nhave\ \ confidence that their rights, opportunities, and access as well as their expectations\ \ around\ntechnologies are respected.\n13. National Artificial Intelligence Initiative\ \ Office. Agency Inventories of AI Use Cases. Accessed Sept. 8,\n2022. https://www.ai.gov/ai-use-case-inventories/\n\ 14. National Highway Traffic Safety Administration. https://www.nhtsa.gov/\n15.\ \ See, e.g., Charles Pruitt. People Doing What They Do Best: The Professional\ \ Engineers and NHTSA. Public\nAdministration Review. Vol. 39, No. 4. Jul.-Aug.,\ \ 1979. https://www.jstor.org/stable/976213?seq=1\n16. The US Department of Transportation\ \ has publicly described the health and other benefits of these\n“traffic calming”\ \ measures. See, e.g.: U.S. Department of Transportation. Traffic Calming to Slow\ \ Vehicle\nSpeeds. Accessed Apr. 17, 2022. https://www.transportation.gov/mission/health/Traffic-Calming-to-Slow­\n\ Vehicle-Speeds\n17. Karen Hao. 
Worried about your firm’s AI ethics? These startups\ \ are here to help.\nA growing ecosystem of “responsible AI” ventures promise\ \ to help organizations monitor and fix their AI\nmodels. MIT Technology Review.\ \ Jan 15., 2021.\nhttps://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/;\ \ Disha Sinha. Top Progressive\nCompanies Building Ethical AI to Look Out for\ \ in 2021. Analytics Insight. June 30, 2021. https://\nwww.analyticsinsight.net/top-progressive-companies-building-ethical-ai-to-look-out-for­\n\ in-2021/ https://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/;\ \ Disha Sinha. Top\nProgressive Companies Building Ethical AI to Look Out for\ \ in 2021. Analytics Insight. June 30, 2021.\n18. Office of Management and Budget.\ \ Study to Identify Methods to Assess Equity: Report to the President.\nAug. 2021.\ \ https://www.whitehouse.gov/wp-content/uploads/2021/08/OMB-Report-on-E013985­\n\ Implementation_508-Compliant-Secure-v1.1.pdf\n19. National Institute of Standards\ \ and Technology. AI Risk Management Framework. Accessed May 23,\n2022. https://www.nist.gov/itl/ai-risk-management-framework\n\ 20. U.S. Department of Energy. U.S. Department of Energy Establishes Artificial\ \ Intelligence Advancement\nCouncil. U.S. Department of Energy Artificial Intelligence\ \ and Technology Office. April 18, 2022. https://\nwww.energy.gov/ai/articles/us-department-energy-establishes-artificial-intelligence-advancement-council\n\ 21. Department of Defense. U.S Department of Defense Responsible Artificial Intelligence\ \ Strategy and\nImplementation Pathway. Jun. 2022. https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/\n\ Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation­\n\ Pathway.PDF\n22. Director of National Intelligence. Principles of Artificial Intelligence\ \ Ethics for the Intelligence\nCommunity. https://www.dni.gov/index.php/features/2763-principles-of-artificial-intelligence-ethics-for­\n\ the-intelligence-community\n64\n" - source_sentence: How does the Blueprint for an AI Bill of Rights relate to existing laws and regulations regarding automated systems? sentences: - " \n \n \n \n \n \n \n \n \n \n \n \n \n \nAbout this Document \nThe Blueprint\ \ for an AI Bill of Rights: Making Automated Systems Work for the American People\ \ was \npublished by the White House Office of Science and Technology Policy in\ \ October 2022. This framework was \nreleased one year after OSTP announced the\ \ launch of a process to develop “a bill of rights for an AI-powered \nworld.”\ \ Its release follows a year of public engagement to inform this initiative. The\ \ framework is available \nonline at: https://www.whitehouse.gov/ostp/ai-bill-of-rights\ \ \nAbout the Office of Science and Technology Policy \nThe Office of Science\ \ and Technology Policy (OSTP) was established by the National Science and Technology\ \ \nPolicy, Organization, and Priorities Act of 1976 to provide the President\ \ and others within the Executive Office \nof the President with advice on the\ \ scientific, engineering, and technological aspects of the economy, national\ \ \nsecurity, health, foreign relations, the environment, and the technological\ \ recovery and use of resources, among \nother topics. 
OSTP leads interagency\ \ science and technology policy coordination efforts, assists the Office of \n\ Management and Budget (OMB) with an annual review and analysis of Federal research\ \ and development in \nbudgets, and serves as a source of scientific and technological\ \ analysis and judgment for the President with \nrespect to major policies, plans,\ \ and programs of the Federal Government. \nLegal Disclaimer \nThe Blueprint for\ \ an AI Bill of Rights: Making Automated Systems Work for the American People\ \ is a white paper \npublished by the White House Office of Science and Technology\ \ Policy. It is intended to support the \ndevelopment of policies and practices\ \ that protect civil rights and promote democratic values in the building, \n\ deployment, and governance of automated systems. \nThe Blueprint for an AI Bill\ \ of Rights is non-binding and does not constitute U.S. government policy. It\ \ \ndoes not supersede, modify, or direct an interpretation of any existing statute,\ \ regulation, policy, or \ninternational instrument. It does not constitute binding\ \ guidance for the public or Federal agencies and \ntherefore does not require\ \ compliance with the principles described herein. It also is not determinative\ \ of what \nthe U.S. government’s position will be in any international negotiation.\ \ Adoption of these principles may not \nmeet the requirements of existing statutes,\ \ regulations, policies, or international instruments, or the \nrequirements of\ \ the Federal agencies that enforce them. These principles are not intended to,\ \ and do not, \nprohibit or limit any lawful activity of a government agency,\ \ including law enforcement, national security, or \nintelligence activities.\ \ \nThe appropriate application of the principles set forth in this white paper\ \ depends significantly on the \ncontext in which automated systems are being\ \ utilized. In some circumstances, application of these principles \nin whole\ \ or in part may not be appropriate given the intended use of automated systems\ \ to achieve government \nagency missions. Future sector-specific guidance will\ \ likely be necessary and important for guiding the use of \nautomated systems\ \ in certain settings such as AI systems used as part of school building security\ \ or automated \nhealth diagnostic systems. \nThe Blueprint for an AI Bill of\ \ Rights recognizes that law enforcement activities require a balancing of \n\ equities, for example, between the protection of sensitive law enforcement information\ \ and the principle of \nnotice; as such, notice may not be appropriate, or may\ \ need to be adjusted to protect sources, methods, and \nother law enforcement\ \ equities. Even in contexts where these principles may not apply in whole or\ \ in part, \nfederal departments and agencies remain subject to judicial, privacy,\ \ and civil liberties oversight as well as \nexisting policies and safeguards\ \ that govern automated systems, including, for example, Executive Order 13960,\ \ \nPromoting the Use of Trustworthy Artificial Intelligence in the Federal Government\ \ (December 2020). \nThis white paper recognizes that national security (which\ \ includes certain law enforcement and \nhomeland security activities) and defense\ \ activities are of increased sensitivity and interest to our nation’s \nadversaries\ \ and are often subject to special requirements, such as those governing classified\ \ information and \nother protected data. 
Such activities require alternative,\ \ compatible safeguards through existing policies that \ngovern automated systems\ \ and AI, such as the Department of Defense (DOD) AI Ethical Principles and \n\ Responsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics\ \ Principles and \nFramework. The implementation of these policies to national\ \ security and defense activities can be informed by \nthe Blueprint for an AI\ \ Bill of Rights where feasible. \nThe Blueprint for an AI Bill of Rights is not\ \ intended to, and does not, create any legal right, benefit, or \ndefense, substantive\ \ or procedural, enforceable at law or in equity by any party against the United\ \ States, its \ndepartments, agencies, or entities, its officers, employees, or\ \ agents, or any other person, nor does it constitute a \nwaiver of sovereign\ \ immunity. \nCopyright Information \nThis document is a work of the United States\ \ Government and is in the public domain (see 17 U.S.C. §105). \n2\n" - " \nENDNOTES\n12. Expectations about reporting are intended for the entity developing\ \ or using the automated system. The\nresulting reports can be provided to the\ \ public, regulators, auditors, industry standards groups, or others\nengaged\ \ in independent review, and should be made public as much as possible consistent\ \ with law,\nregulation, and policy, and noting that intellectual property or\ \ law enforcement considerations may prevent\npublic release. These reporting\ \ expectations are important for transparency, so the American people can\nhave\ \ confidence that their rights, opportunities, and access as well as their expectations\ \ around\ntechnologies are respected.\n13. National Artificial Intelligence Initiative\ \ Office. Agency Inventories of AI Use Cases. Accessed Sept. 8,\n2022. https://www.ai.gov/ai-use-case-inventories/\n\ 14. National Highway Traffic Safety Administration. https://www.nhtsa.gov/\n15.\ \ See, e.g., Charles Pruitt. People Doing What They Do Best: The Professional\ \ Engineers and NHTSA. Public\nAdministration Review. Vol. 39, No. 4. Jul.-Aug.,\ \ 1979. https://www.jstor.org/stable/976213?seq=1\n16. The US Department of Transportation\ \ has publicly described the health and other benefits of these\n“traffic calming”\ \ measures. See, e.g.: U.S. Department of Transportation. Traffic Calming to Slow\ \ Vehicle\nSpeeds. Accessed Apr. 17, 2022. https://www.transportation.gov/mission/health/Traffic-Calming-to-Slow­\n\ Vehicle-Speeds\n17. Karen Hao. Worried about your firm’s AI ethics? These startups\ \ are here to help.\nA growing ecosystem of “responsible AI” ventures promise\ \ to help organizations monitor and fix their AI\nmodels. MIT Technology Review.\ \ Jan 15., 2021.\nhttps://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/;\ \ Disha Sinha. Top Progressive\nCompanies Building Ethical AI to Look Out for\ \ in 2021. Analytics Insight. June 30, 2021. https://\nwww.analyticsinsight.net/top-progressive-companies-building-ethical-ai-to-look-out-for­\n\ in-2021/ https://www.technologyreview.com/2021/01/15/1016183/ai-ethics-startups/;\ \ Disha Sinha. Top\nProgressive Companies Building Ethical AI to Look Out for\ \ in 2021. Analytics Insight. June 30, 2021.\n18. Office of Management and Budget.\ \ Study to Identify Methods to Assess Equity: Report to the President.\nAug. 2021.\ \ https://www.whitehouse.gov/wp-content/uploads/2021/08/OMB-Report-on-E013985­\n\ Implementation_508-Compliant-Secure-v1.1.pdf\n19. 
National Institute of Standards\ \ and Technology. AI Risk Management Framework. Accessed May 23,\n2022. https://www.nist.gov/itl/ai-risk-management-framework\n\ 20. U.S. Department of Energy. U.S. Department of Energy Establishes Artificial\ \ Intelligence Advancement\nCouncil. U.S. Department of Energy Artificial Intelligence\ \ and Technology Office. April 18, 2022. https://\nwww.energy.gov/ai/articles/us-department-energy-establishes-artificial-intelligence-advancement-council\n\ 21. Department of Defense. U.S Department of Defense Responsible Artificial Intelligence\ \ Strategy and\nImplementation Pathway. Jun. 2022. https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/\n\ Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation­\n\ Pathway.PDF\n22. Director of National Intelligence. Principles of Artificial Intelligence\ \ Ethics for the Intelligence\nCommunity. https://www.dni.gov/index.php/features/2763-principles-of-artificial-intelligence-ethics-for­\n\ the-intelligence-community\n64\n" - " \n12 \nCSAM. Even when trained on “clean” data, increasingly capable GAI models\ \ can synthesize or produce \nsynthetic NCII and CSAM. Websites, mobile apps,\ \ and custom-built models that generate synthetic NCII \nhave moved from niche\ \ internet forums to mainstream, automated, and scaled online businesses. \n\ Trustworthy AI Characteristics: Fair with Harmful Bias Managed, Safe, Privacy\ \ Enhanced \n2.12. \nValue Chain and Component Integration \nGAI value chains\ \ involve many third-party components such as procured datasets, pre-trained models,\ \ \nand software libraries. These components might be improperly obtained or not\ \ properly vetted, leading \nto diminished transparency or accountability for\ \ downstream users. While this is a risk for traditional AI \nsystems and some\ \ other digital technologies, the risk is exacerbated for GAI due to the scale\ \ of the \ntraining data, which may be too large for humans to vet; the difficulty\ \ of training foundation models, \nwhich leads to extensive reuse of limited numbers\ \ of models; and the extent to which GAI may be \nintegrated into other devices\ \ and services. As GAI systems often involve many distinct third-party \ncomponents\ \ and data sources, it may be difficult to attribute issues in a system’s behavior\ \ to any one of \nthese sources. \nErrors in third-party GAI components can also\ \ have downstream impacts on accuracy and robustness. \nFor example, test datasets\ \ commonly used to benchmark or validate models can contain label errors. \nInaccuracies\ \ in these labels can impact the “stability” or robustness of these benchmarks,\ \ which many \nGAI practitioners consider during the model selection process.\ \ \nTrustworthy AI Characteristics: Accountable and Transparent, Explainable\ \ and Interpretable, Fair with \nHarmful Bias Managed, Privacy Enhanced, Safe,\ \ Secure and Resilient, Valid and Reliable \n3. \nSuggested Actions to Manage\ \ GAI Risks \nThe following suggested actions target risks unique to or exacerbated\ \ by GAI. \nIn addition to the suggested actions below, AI risk management activities\ \ and actions set forth in the AI \nRMF 1.0 and Playbook are already applicable\ \ for managing GAI risks. 
Organizations are encouraged to \napply the activities\ \ suggested in the AI RMF and its Playbook when managing the risk of GAI systems.\ \ \nImplementation of the suggested actions will vary depending on the type of\ \ risk, characteristics of GAI \nsystems, stage of the GAI lifecycle, and relevant\ \ AI actors involved. \nSuggested actions to manage GAI risks can be found in\ \ the tables below: \n• \nThe suggested actions are organized by relevant AI RMF\ \ subcategories to streamline these \nactivities alongside implementation of the\ \ AI RMF. \n• \nNot every subcategory of the AI RMF is included in this document.13\ \ Suggested actions are \nlisted for only some subcategories. \n \n \n13 As this\ \ document was focused on the GAI PWG efforts and primary considerations (see Appendix\ \ A), AI RMF \nsubcategories not addressed here may be added later. \n" - source_sentence: What proactive steps should be taken during the design phase of automated systems to assess equity and prevent algorithmic discrimination? sentences: - " \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations\ \ for automated systems are meant to serve as a blueprint for the development\ \ of additional \ntechnical standards and practices that are tailored for particular\ \ sectors and contexts. \nAny automated system should be tested to help ensure\ \ it is free from algorithmic discrimination before it can be \nsold or used.\ \ Protection against algorithmic discrimination should include designing to ensure\ \ equity, broadly \nconstrued. Some algorithmic discrimination is already prohibited\ \ under existing anti-discrimination law. The \nexpectations set out below describe\ \ proactive technical and policy steps that can be taken to not only \nreinforce\ \ those legal protections but extend beyond them to ensure equity for underserved\ \ communities48 \neven in circumstances where a specific legal protection may\ \ not be clearly established. These protections \nshould be instituted throughout\ \ the design, development, and deployment process and are described below \nroughly\ \ in the order in which they would be instituted. \nProtect the public from algorithmic\ \ discrimination in a proactive and ongoing manner \nProactive assessment of equity\ \ in design. Those responsible for the development, use, or oversight of \nautomated\ \ systems should conduct proactive equity assessments in the design phase of the\ \ technology \nresearch and development or during its acquisition to review potential\ \ input data, associated historical \ncontext, accessibility for people with disabilities,\ \ and societal goals to identify potential discrimination and \neffects on equity\ \ resulting from the introduction of the technology. The assessed groups should\ \ be as inclusive \nas possible of the underserved communities mentioned in the\ \ equity definition: Black, Latino, and Indigenous \nand Native American persons,\ \ Asian Americans and Pacific Islanders and other persons of color; members of\ \ \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual,\ \ transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons\ \ with disabilities; persons who live in rural areas; and persons \notherwise\ \ adversely affected by persistent poverty or inequality. Assessment could include\ \ both qualitative \nand quantitative evaluations of the system. 
This equity assessment\ \ should also be considered a core part of the \ngoals of the consultation conducted\ \ as part of the safety and efficacy review. \nRepresentative and robust data.\ \ Any data used as part of system development or assessment should be \nrepresentative\ \ of local communities based on the planned deployment setting and should be reviewed\ \ for bias \nbased on the historical and societal context of the data. Such data\ \ should be sufficiently robust to identify and \nhelp to mitigate biases and\ \ potential harms. \nGuarding against proxies. Directly using demographic information\ \ in the design, development, or \ndeployment of an automated system (for purposes\ \ other than evaluating a system for discrimination or using \na system to counter\ \ discrimination) runs a high risk of leading to algorithmic discrimination and\ \ should be \navoided. In many cases, attributes that are highly correlated with\ \ demographic features, known as proxies, can \ncontribute to algorithmic discrimination.\ \ In cases where use of the demographic features themselves would \nlead to illegal\ \ algorithmic discrimination, reliance on such proxies in decision-making (such\ \ as that facilitated \nby an algorithm) may also be prohibited by law. Proactive\ \ testing should be performed to identify proxies by \ntesting for correlation\ \ between demographic information and attributes in any data used as part of system\ \ \ndesign, development, or use. If a proxy is identified, designers, developers,\ \ and deployers should remove the \nproxy; if needed, it may be possible to identify\ \ alternative attributes that can be used instead. At a minimum, \norganizations\ \ should ensure a proxy feature is not given undue weight and should monitor the\ \ system closely \nfor any resulting algorithmic discrimination. \n26\nAlgorithmic\ \ \nDiscrimination \nProtections \n" - " \n \n \n \n \n \n \nHUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nWHAT\ \ SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems\ \ are meant to serve as a blueprint for the development of additional \ntechnical\ \ standards and practices that are tailored for particular sectors and contexts.\ \ \nEquitable. Consideration should be given to ensuring outcomes of the fallback\ \ and escalation system are \nequitable when compared to those of the automated\ \ system and such that the fallback and escalation \nsystem provides equitable\ \ access to underserved communities.105 \nTimely. Human consideration and fallback\ \ are only useful if they are conducted and concluded in a \ntimely manner. The\ \ determination of what is timely should be made relative to the specific automated\ \ \nsystem, and the review system should be staffed and regularly assessed to\ \ ensure it is providing timely \nconsideration and fallback. In time-critical\ \ systems, this mechanism should be immediately available or, \nwhere possible,\ \ available before the harm occurs. Time-critical systems include, but are not\ \ limited to, \nvoting-related systems, automated building access and other access\ \ systems, systems that form a critical \ncomponent of healthcare, and systems\ \ that have the ability to withhold wages or otherwise cause \nimmediate financial\ \ penalties. \nEffective. 
The organizational structure surrounding processes for\ \ consideration and fallback should \nbe designed so that if the human decision-maker\ \ charged with reassessing a decision determines that it \nshould be overruled,\ \ the new decision will be effectively enacted. This includes ensuring that the\ \ new \ndecision is entered into the automated system throughout its components,\ \ any previous repercussions from \nthe old decision are also overturned, and\ \ safeguards are put in place to help ensure that future decisions do \nnot result\ \ in the same errors. \nMaintained. The human consideration and fallback process\ \ and any associated automated processes \nshould be maintained and supported\ \ as long as the relevant automated system continues to be in use. \nInstitute\ \ training, assessment, and oversight to combat automation bias and ensure any\ \ \nhuman-based components of a system are effective. \nTraining and assessment.\ \ Anyone administering, interacting with, or interpreting the outputs of an auto­\n\ mated system should receive training in that system, including how to properly\ \ interpret outputs of a system \nin light of its intended purpose and in how\ \ to mitigate the effects of automation bias. The training should reoc­\ncur regularly\ \ to ensure it is up to date with the system and to ensure the system is used\ \ appropriately. Assess­\nment should be ongoing to ensure that the use of the\ \ system with human involvement provides for appropri­\nate results, i.e., that\ \ the involvement of people does not invalidate the system's assessment as safe\ \ and effective \nor lead to algorithmic discrimination. \nOversight. Human-based\ \ systems have the potential for bias, including automation bias, as well as other\ \ \nconcerns that may limit their effectiveness. The results of assessments of\ \ the efficacy and potential bias of \nsuch human-based systems should be overseen\ \ by governance structures that have the potential to update the \noperation of\ \ the human-based system in order to mitigate these effects. \n50\n" - " \n \n \nApplying The Blueprint for an AI Bill of Rights \nSENSITIVE DATA: Data\ \ and metadata are sensitive if they pertain to an individual in a sensitive domain\ \ \n(defined below); are generated by technologies used in a sensitive domain;\ \ can be used to infer data from a \nsensitive domain or sensitive data about\ \ an individual (such as disability-related data, genomic data, biometric \ndata,\ \ behavioral data, geolocation data, data related to interaction with the criminal\ \ justice system, relationship \nhistory and legal status such as custody and\ \ divorce information, and home, work, or school environmental \ndata); or have\ \ the reasonable potential to be used in ways that are likely to expose individuals\ \ to meaningful \nharm, such as a loss of privacy or financial harm due to identity\ \ theft. Data and metadata generated by or about \nthose who are not yet legal\ \ adults is also sensitive, even if not related to a sensitive domain. Such data\ \ includes, \nbut is not limited to, numerical, text, image, audio, or video data.\ \ \nSENSITIVE DOMAINS: “Sensitive domains” are those in which activities being\ \ conducted can cause material \nharms, including significant adverse effects\ \ on human rights such as autonomy and dignity, as well as civil liber­\nties\ \ and civil rights. 
Domains that have historically been singled out as deserving\ \ of enhanced data protections \nor where such enhanced protections are reasonably\ \ expected by the public include, but are not limited to, \nhealth, family planning\ \ and care, employment, education, criminal justice, and personal finance. In\ \ the context \nof this framework, such domains are considered sensitive whether\ \ or not the specifics of a system context \nwould necessitate coverage under\ \ existing law, and domains and data that are considered sensitive are under­\n\ stood to change over time based on societal norms and context. \nSURVEILLANCE\ \ TECHNOLOGY: “Surveillance technology” refers to products or services marketed\ \ for \nor that can be lawfully used to detect, monitor, intercept, collect, exploit,\ \ preserve, protect, transmit, and/or \nretain data, identifying information,\ \ or communications concerning individuals or groups. This framework \nlimits\ \ its focus to both government and commercial use of surveillance technologies\ \ when juxtaposed with \nreal-time or subsequent automated analysis and when such\ \ systems have a potential for meaningful impact \non individuals’ or communities’\ \ rights, opportunities, or access. \nUNDERSERVED COMMUNITIES: The term “underserved\ \ communities” refers to communities that have \nbeen systematically denied a\ \ full opportunity to participate in aspects of economic, social, and civic life,\ \ as \nexemplified by the list in the preceding definition of “equity.” \n11\n" model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.7 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 0.9666666666666667 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.7 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19333333333333338 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.10000000000000003 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.7 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.9 name: Cosine Recall@3 - type: cosine_recall@5 value: 0.9666666666666667 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.8478532019852957 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.7983333333333333 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.7983333333333333 name: Cosine Map@100 - type: dot_accuracy@1 value: 0.7 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 0.9 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 0.9666666666666667 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 1.0 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.7 name: Dot Precision@1 - type: dot_precision@3 value: 0.3 name: Dot Precision@3 - type: dot_precision@5 value: 0.19333333333333338 name: Dot Precision@5 - type: dot_precision@10 value: 0.10000000000000003 name: Dot Precision@10 - type: dot_recall@1 value: 0.7 name: Dot Recall@1 - type: dot_recall@3 value: 0.9 name: Dot Recall@3 - type: dot_recall@5 value: 0.9666666666666667 name: Dot Recall@5 - type: dot_recall@10 value: 1.0 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.8478532019852957 name: Dot Ndcg@10 - type: dot_mrr@10 value: 
0.7983333333333333 name: Dot Mrr@10 - type: dot_map@100 value: 0.7983333333333333 name: Dot Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-m This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision e2b128b9fa60c82b4585512b33e1544224ffff42 --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 768 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("rgtlai/ai-policy-ft") # Run inference sentences = [ 'What proactive steps should be taken during the design phase of automated systems to assess equity and prevent algorithmic discrimination?', ' \n \n \n \n \n \n \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations for automated systems are meant to serve as a blueprint for the development of additional \ntechnical standards and practices that are tailored for particular sectors and contexts. \nAny automated system should be tested to help ensure it is free from algorithmic discrimination before it can be \nsold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly \nconstrued. Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The \nexpectations set out below describe proactive technical and policy steps that can be taken to not only \nreinforce those legal protections but extend beyond them to ensure equity for underserved communities48 \neven in circumstances where a specific legal protection may not be clearly established. These protections \nshould be instituted throughout the design, development, and deployment process and are described below \nroughly in the order in which they would be instituted. \nProtect the public from algorithmic discrimination in a proactive and ongoing manner \nProactive assessment of equity in design. 
Those responsible for the development, use, or oversight of \nautomated systems should conduct proactive equity assessments in the design phase of the technology \nresearch and development or during its acquisition to review potential input data, associated historical \ncontext, accessibility for people with disabilities, and societal goals to identify potential discrimination and \neffects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive \nas possible of the underserved communities mentioned in the equity definition: Black, Latino, and Indigenous \nand Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of \nreligious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and inter-\nsex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons \notherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative \nand quantitative evaluations of the system. This equity assessment should also be considered a core part of the \ngoals of the consultation conducted as part of the safety and efficacy review. \nRepresentative and robust data. Any data used as part of system development or assessment should be \nrepresentative of local communities based on the planned deployment setting and should be reviewed for bias \nbased on the historical and societal context of the data. Such data should be sufficiently robust to identify and \nhelp to mitigate biases and potential harms. \nGuarding against proxies. Directly using demographic information in the design, development, or \ndeployment of an automated system (for purposes other than evaluating a system for discrimination or using \na system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be \navoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can \ncontribute to algorithmic discrimination. In cases where use of the demographic features themselves would \nlead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated \nby an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by \ntesting for correlation between demographic information and attributes in any data used as part of system \ndesign, development, or use. If a proxy is identified, designers, developers, and deployers should remove the \nproxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, \norganizations should ensure a proxy feature is not given undue weight and should monitor the system closely \nfor any resulting algorithmic discrimination. 
\n26\nAlgorithmic \nDiscrimination \nProtections \n', ' \n \n \nApplying The Blueprint for an AI Bill of Rights \nSENSITIVE DATA: Data and metadata are sensitive if they pertain to an individual in a sensitive domain \n(defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a \nsensitive domain or sensitive data about an individual (such as disability-related data, genomic data, biometric \ndata, behavioral data, geolocation data, data related to interaction with the criminal justice system, relationship \nhistory and legal status such as custody and divorce information, and home, work, or school environmental \ndata); or have the reasonable potential to be used in ways that are likely to expose individuals to meaningful \nharm, such as a loss of privacy or financial harm due to identity theft. Data and metadata generated by or about \nthose who are not yet legal adults is also sensitive, even if not related to a sensitive domain. Such data includes, \nbut is not limited to, numerical, text, image, audio, or video data. \nSENSITIVE DOMAINS: “Sensitive domains” are those in which activities being conducted can cause material \nharms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liber\xad\nties and civil rights. Domains that have historically been singled out as deserving of enhanced data protections \nor where such enhanced protections are reasonably expected by the public include, but are not limited to, \nhealth, family planning and care, employment, education, criminal justice, and personal finance. In the context \nof this framework, such domains are considered sensitive whether or not the specifics of a system context \nwould necessitate coverage under existing law, and domains and data that are considered sensitive are under\xad\nstood to change over time based on societal norms and context. \nSURVEILLANCE TECHNOLOGY: “Surveillance technology” refers to products or services marketed for \nor that can be lawfully used to detect, monitor, intercept, collect, exploit, preserve, protect, transmit, and/or \nretain data, identifying information, or communications concerning individuals or groups. This framework \nlimits its focus to both government and commercial use of surveillance technologies when juxtaposed with \nreal-time or subsequent automated analysis and when such systems have a potential for meaningful impact \non individuals’ or communities’ rights, opportunities, or access. \nUNDERSERVED COMMUNITIES: The term “underserved communities” refers to communities that have \nbeen systematically denied a full opportunity to participate in aspects of economic, social, and civic life, as \nexemplified by the list in the preceding definition of “equity.” \n11\n', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.7 | | cosine_accuracy@3 | 0.9 | | cosine_accuracy@5 | 0.9667 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.7 | | cosine_precision@3 | 0.3 | | cosine_precision@5 | 0.1933 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.7 | | cosine_recall@3 | 0.9 | | cosine_recall@5 | 0.9667 | | cosine_recall@10 | 1.0 | | cosine_ndcg@10 | 0.8479 | | cosine_mrr@10 | 0.7983 | | **cosine_map@100** | **0.7983** | | dot_accuracy@1 | 0.7 | | dot_accuracy@3 | 0.9 | | dot_accuracy@5 | 0.9667 | | dot_accuracy@10 | 1.0 | | dot_precision@1 | 0.7 | | dot_precision@3 | 0.3 | | dot_precision@5 | 0.1933 | | dot_precision@10 | 0.1 | | dot_recall@1 | 0.7 | | dot_recall@3 | 0.9 | | dot_recall@5 | 0.9667 | | dot_recall@10 | 1.0 | | dot_ndcg@10 | 0.8479 | | dot_mrr@10 | 0.7983 | | dot_map@100 | 0.7983 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 200 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 200 samples: | | sentence_0 | sentence_1 | |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 12 tokens</li><li>mean: 22.34 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 447.96 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | 
|:----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What is the purpose of the AI Bill of Rights mentioned in the context?</code> | <code> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022 <br></code> | | <code>When was the Blueprint for an AI Bill of Rights published?</code> | <code> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br>BLUEPRINT FOR AN <br>AI BILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022 <br></code> | | <code>What is the purpose of the Blueprint for an AI Bill of Rights as published by the White House Office of Science and Technology Policy?</code> | <code> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br> <br>About this Document <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was <br>published by the White House Office of Science and Technology Policy in October 2022. This framework was <br>released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered <br>world.” Its release follows a year of public engagement to inform this initiative. 
The framework is available <br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights <br>About the Office of Science and Technology Policy <br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology <br>Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office <br>of the President with advice on the scientific, engineering, and technological aspects of the economy, national <br>security, health, foreign relations, the environment, and the technological recovery and use of resources, among <br>other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of <br>Management and Budget (OMB) with an annual review and analysis of Federal research and development in <br>budgets, and serves as a source of scientific and technological analysis and judgment for the President with <br>respect to major policies, plans, and programs of the Federal Government. <br>Legal Disclaimer <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper <br>published by the White House Office of Science and Technology Policy. It is intended to support the <br>development of policies and practices that protect civil rights and promote democratic values in the building, <br>deployment, and governance of automated systems. <br>The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It <br>does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or <br>international instrument. It does not constitute binding guidance for the public or Federal agencies and <br>therefore does not require compliance with the principles described herein. It also is not determinative of what <br>the U.S. government’s position will be in any international negotiation. Adoption of these principles may not <br>meet the requirements of existing statutes, regulations, policies, or international instruments, or the <br>requirements of the Federal agencies that enforce them. These principles are not intended to, and do not, <br>prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or <br>intelligence activities. <br>The appropriate application of the principles set forth in this white paper depends significantly on the <br>context in which automated systems are being utilized. In some circumstances, application of these principles <br>in whole or in part may not be appropriate given the intended use of automated systems to achieve government <br>agency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of <br>automated systems in certain settings such as AI systems used as part of school building security or automated <br>health diagnostic systems. <br>The Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of <br>equities, for example, between the protection of sensitive law enforcement information and the principle of <br>notice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and <br>other law enforcement equities. 
Even in contexts where these principles may not apply in whole or in part, <br>federal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as <br>existing policies and safeguards that govern automated systems, including, for example, Executive Order 13960, <br>Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020). <br>This white paper recognizes that national security (which includes certain law enforcement and <br>homeland security activities) and defense activities are of increased sensitivity and interest to our nation’s <br>adversaries and are often subject to special requirements, such as those governing classified information and <br>other protected data. Such activities require alternative, compatible safeguards through existing policies that <br>govern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and <br>Responsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and <br>Framework. The implementation of these policies to national security and defense activities can be informed by <br>the Blueprint for an AI Bill of Rights where feasible. <br>The Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or <br>defense, substantive or procedural, enforceable at law or in equity by any party against the United States, its <br>departments, agencies, or entities, its officers, employees, or agents, or any other person, nor does it constitute a <br>waiver of sovereign immunity. <br>Copyright Information <br>This document is a work of the United States Government and is in the public domain (see 17 U.S.C. §105). <br>2<br></code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 5 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: 
None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `eval_use_gather_object`: False - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | cosine_map@100 | |:------:|:----:|:--------------:| | 1.0 | 13 | 0.7303 | | 2.0 | 26 | 0.7356 | | 3.0 | 39 | 0.7828 | | 3.8462 | 50 | 0.7817 | | 4.0 | 52 | 0.7817 | | 5.0 | 65 | 0.7983 | ### Framework Versions - Python: 3.11.10 - Sentence Transformers: 3.1.1 - Transformers: 4.44.2 - PyTorch: 2.4.1 - Accelerate: 0.34.2 - Datasets: 3.0.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart 
Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
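A minimal fine-tuning sketch that reuses the loss configuration reported in this card (MatryoshkaLoss wrapping MultipleNegativesRankingLoss over dimensions 768/512/256/128/64). The training pairs below are illustrative placeholders, not the original 200-sample dataset:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Illustrative placeholder pairs; the card above was trained on 200 (question, passage) samples.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "What is the purpose of the AI Bill of Rights?",
        "What proactive steps help prevent algorithmic discrimination?",
    ],
    "sentence_1": [
        "The Blueprint for an AI Bill of Rights was published by OSTP in October 2022.",
        "Proactive equity assessments should be conducted in the design phase.",
    ],
})

model = SentenceTransformer("rgtlai/ai-policy-ft")

# Same loss stack as reported above: Matryoshka wrapping MultipleNegativesRankingLoss.
base_loss = losses.MultipleNegativesRankingLoss(model)
loss = losses.MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```

MatryoshkaLoss applies the base ranking loss at each truncated embedding size, so the fine-tuned vectors remain usable when truncated to as few as 64 dimensions.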
Gragroo/fine_tuned_model
Gragroo
2024-09-24T15:49:56Z
91
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-24T15:43:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RedSparkie/maximo
RedSparkie
2024-09-24T15:46:32Z
8
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "migrated", "photorealistic", "sexy", "woman", "celebrity", "realistic", "influencer", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-09-15T18:13:22Z
--- license: other license_name: bespoke-lora-trained-license license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=True tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - migrated - photorealistic - sexy - woman - celebrity - realistic - influencer base_model: black-forest-labs/FLUX.1-dev instance_prompt: LLLLT widget: - text: 'A photo of Masi, picket Sign, smokey eyes makeup with black lipstick, blond hair and beautiful eyes. she is holding a Picket sign which says: "¡PROHIBIDO FUMAR!", Long fair blond hair shaved on one side with light roots, perfect face, perfect anatomy, cute, beautiful, perfect beauty, amazing beauty, pure genius, absolutely perfect, gentle, highly detailed, breathtaking, ' output: url: >- 27081925.jpeg - text: 'A photo of Masi, a woman in a blue Adidas tracksuit, is standing on a balcony with her hand on her hip. She is posing for the camera in front of a pool. The scene is set in a tropical location, with palm trees visible in the background. The woman''s Adidas tracksuit is blue and yellow. The overall atmosphere of the photo is relaxed and summery.' output: url: >- 27081922.jpeg - text: 'Masi with a striking, alternative style. She has vibrant, shoulder-length white hair with blunt bangs that frame her face. Her skin is fair, and she has multiple facial piercings, including a septum ring and a lip ring. Her eyes are accentuated with heavy black eyeliner, creating a dramatic look. She wears a black choker with silver metal rings and chains, which adds to her edgy appearance. She''s wearing a black metal tshirt with a logo saying "FUMO" and some tight short jeans. Her lips are painted black, and she has a small nose stud. Her makeup is dark and bold, with a focus on her eyes and lips. The background shows a neutral-toned room with beige walls and a framed black and white print of a palm tree on the wall to the right. The lighting is warm, casting a soft glow over her skin and creating a cozy, intimate atmosphere. The texture of the fabric she is lying on appears to be soft and plush, possibly a blanket or a bedspread.''' output: url: >- 27081927.jpeg - text: 'professional shot, Illustration on studio muted color background, Masi, Sunrise, neon light, extreme Close-up view, by David LaChapelle , 16k, big breasts, wearing a pyjama, full body, lying down on side, ' output: url: >- 27081924.jpeg - text: 'UHD, 8K, ultra detailed, a cinematic photograph of close up portrait Oil paint splatter painterly fantasy artistic painting of an ultra realistic Masi, updo fair brown hair, in a red and black starfleet dress, sitting in the Captains chair on the Enterprise, cinematic dreamy atmosphere. intuitive muted colors, soft brush strokes, epic artwork, oil paint texture, glowing effects, spaceship enterior in the background, beautiful lighting, great composition.' output: url: >- 27081930.jpeg - text: 'A photo of Masi, as if she were a Metal Gear Solid with ps1 graphics. ' output: url: >- 27081926.jpeg --- Not mine. [Original link](https://civitai.com/models/752290/masi-twitch-streamer)
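A minimal diffusers inference sketch for this LoRA. It assumes `load_lora_weights` can auto-detect the adapter file in this repo (pass `weight_name=...` explicitly if not) and uses the `LLLLT` instance prompt declared in the metadata above; the sampler settings are illustrative:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# Load this LoRA from the Hub; pass weight_name="..." if the file is not auto-detected.
pipe.load_lora_weights("RedSparkie/maximo")
pipe.enable_model_cpu_offload()  # optional: trades speed for lower VRAM usage

# "LLLLT" is the instance prompt declared in this card's metadata.
image = pipe(
    "A photo of LLLLT, portrait, soft studio light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("maximo_lora.png")
```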
Mayank1996/videomae-base-finetuned-ucf101-subset_fhbh_colab
Mayank1996
2024-09-24T15:43:50Z
46
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-09-24T15:31:52Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset_fhbh_colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset_fhbh_colab This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6292 - Accuracy: 0.75 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 48 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6714 | 0.25 | 12 | 0.6292 | 0.75 | | 0.5293 | 1.25 | 24 | 0.9094 | 0.3333 | | 0.752 | 2.25 | 36 | 1.1135 | 0.4167 | | 1.2968 | 3.25 | 48 | 1.1206 | 0.5 | ### Framework versions - Transformers 4.44.2 - Pytorch 1.13.0+cu117 - Datasets 3.0.0 - Tokenizers 0.19.1
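A minimal inference sketch for this checkpoint; the 16 random frames are placeholders for a real decoded clip, and the 224x224 frame size assumes the VideoMAE-base defaults:

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "Mayank1996/videomae-base-finetuned-ucf101-subset_fhbh_colab"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

# 16 dummy RGB frames stand in for a real clip; replace with decoded video frames.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```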
RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf
RichardErkhov
2024-09-24T15:34:57Z
5
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-09-24T08:38:08Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.1-8b-instruct-ultrafeedback-armorm - GGUF - Model creator: https://huggingface.co/simonycl/ - Original model: https://huggingface.co/simonycl/llama-3.1-8b-instruct-ultrafeedback-armorm/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q2_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q2_K.gguf) | Q2_K | 2.96GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.IQ3_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.IQ3_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q3_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q3_K.gguf) | Q3_K | 3.74GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_0.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_K.gguf) | Q4_K | 4.58GB | | 
[llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_1.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_0.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_K.gguf) | Q5_K | 5.34GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_1.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q6_K.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q6_K.gguf) | Q6_K | 6.14GB | | [llama-3.1-8b-instruct-ultrafeedback-armorm.Q8_0.gguf](https://huggingface.co/RichardErkhov/simonycl_-_llama-3.1-8b-instruct-ultrafeedback-armorm-gguf/blob/main/llama-3.1-8b-instruct-ultrafeedback-armorm.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-8B-Instruct tags: - alignment-handbook - generated_from_trainer datasets: - simonycl/llama3.1-ultrafeedback-annotate-armorm model-index: - name: llama-3.1-8b-instruct-armorm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-3.1-8b-instruct-armorm This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the simonycl/llama3.1-ultrafeedback-annotate-armorm dataset. 
It achieves the following results on the evaluation set: - Loss: 0.3837 - Rewards/chosen: -3.2511 - Rewards/rejected: -5.1202 - Rewards/accuracies: 0.8644 - Rewards/margins: 1.8691 - Logps/rejected: -797.6878 - Logps/chosen: -602.0981 - Logits/rejected: -1.3603 - Logits/chosen: -1.3921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.4269 | 0.8444 | 400 | 0.3837 | -3.2511 | -5.1202 | 0.8644 | 1.8691 | -797.6878 | -602.0981 | -1.3603 | -1.3921 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu121 - Datasets 2.21.0 - Tokenizers 0.19.1
Mayank1996/videomae-base-finetuned-ucf101-subset
Mayank1996
2024-09-24T15:32:03Z
54
0
transformers
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "base_model:finetune:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
video-classification
2024-09-19T09:32:27Z
--- library_name: transformers license: cc-by-nc-4.0 base_model: MCG-NJU/videomae-base tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-ucf101-subset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8202 - Accuracy: 0.25 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8867 | 1.0 | 12 | 0.8202 | 0.25 | ### Framework versions - Transformers 4.44.2 - Pytorch 1.10.0+cu113 - Datasets 3.0.0 - Tokenizers 0.19.1
mattshumer/ref_70_e3
mattshumer
2024-09-24T15:28:53Z
21
57
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "base_model:meta-llama/Llama-3.1-70B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-70B-Instruct", "license:llama3.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-09-08T18:28:23Z
--- license: llama3.1 base_model: meta-llama/Meta-Llama-3.1-70B-Instruct pipeline_tag: text-generation library_name: transformers --- # Reflection Llama-3.1 70B | IMPORTANT — This is the new, working version of the Reflection Llama 3.1 70B model. Please use this version. **Reflection Llama-3.1 70B is an open-source LLM, trained with a new technique called Reflection-Tuning that teaches a LLM to detect mistakes in its reasoning and correct course.** The model was trained on synthetic data generated by [Glaive](https://glaive.ai). If you're training a model, Glaive is incredible — use them. ## Benchmarks Trained from Llama 3.1 70B Instruct, you can sample from Reflection Llama-3.1 70B using the same code, pipelines, etc. as any other Llama model. It even uses the stock Llama 3.1 chat template format (though, we've trained in a few new special tokens to aid in reasoning and reflection). During sampling, the model will start by outputting reasoning inside `<thinking>` and `</thinking>` tags, and then once it is satisfied with its reasoning, it will output the final answer inside `<output>` and `</output>` tags. Each of these tags are special tokens, trained into the model. This enables the model to separate its internal thoughts and reasoning from its final answer, improving the experience for the user. Inside the `<thinking>` section, the model may output one or more `<reflection>` tags, which signals the model has caught an error in its reasoning and will attempt to correct it before providing a final answer. ## System Prompt The system prompt used for training this model is: ``` You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags. ``` We recommend using this exact system prompt to get the best results from Reflection Llama-3.1 70B. You may also want to experiment combining this system prompt with your own custom instructions to customize the behavior of the model. ## Chat Format As mentioned above, the model uses the standard Llama 3.1 chat format. Here’s an example: ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.<|eot_id|><|start_header_id|>user<|end_header_id|> what is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|> ``` ## Tips for Performance - We are initially recommending a `temperature` of `.7` and a `top_p` of `.95`. - For increased accuracy, append `Think carefully.` at the end of your messages. ## Dataset / Report Both the dataset and a brief report detailing how we trained this model will be released next week, alongside our Reflection 405B model that we expect will be the top-performing LLM in the world, including closed-source models. --- Thanks to Jason Kuperberg and Josh Bickett from the [HyperWrite](https://hyperwriteai.com) team for reviewing drafts of the report we'll be releasing next week. Also, we know right now the model is split into a ton of files. We'll condense this soon to make the model easier to download and work with!
fajrijrifa/finetuned-bge-m3_pussertif_mix_model
fajrijrifa
2024-09-24T15:24:39Z
55
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-09-24T15:22:32Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction --- # SentenceTransformer This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) --> - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'The weather is lovely today.', "It's so sunny outside!", 'He drove to the stadium.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.1 - Transformers: 4.44.2 - PyTorch: 2.4.1+cu121 - Accelerate: 0.34.2 - Datasets: 3.0.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
jialei12138/Qwen-Qwen1.5-1.8B-1727191383
jialei12138
2024-09-24T15:23:17Z
5
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen1.5-1.8B", "base_model:adapter:Qwen/Qwen1.5-1.8B", "region:us" ]
null
2024-09-24T15:23:02Z
--- base_model: Qwen/Qwen1.5-1.8B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.12.0
sag-uniroma2/u-depp-llama-2-13b
sag-uniroma2
2024-09-24T15:07:58Z
47
0
peft
[ "peft", "bg", "ca", "zh", "hr", "cs", "da", "nl", "en", "fi", "fr", "de", "hu", "id", "it", "ja", "ko", "no", "pl", "pt", "ro", "ru", "sl", "es", "sv", "uk", "sr", "dataset:universal-dependencies/universal_dependencies", "base_model:meta-llama/Llama-2-13b-hf", "base_model:adapter:meta-llama/Llama-2-13b-hf", "license:openrail", "region:us" ]
null
2024-09-24T12:53:08Z
--- license: openrail datasets: - universal-dependencies/universal_dependencies language: - bg - ca - zh - hr - cs - da - nl - en - fi - fr - de - hu - id - it - ja - ko - 'no' - pl - pt - ro - ru - sl - es - sv - uk - sr base_model: - meta-llama/Llama-2-13b-hf library_name: peft --- # Introduction The paper explores the capabilities of Large Language Models (LLMs) like LLaMA in syntactic parsing tasks. We introduce U-DepPLLaMA, a novel architecture that treats Dependency Parsing as a sequence-to-sequence problem, achieving state-of-the-art results in 26 languages from the Universal Dependency Treebank. Our approach demonstrates that LLMs can handle dependency parsing without the need for specialized architectures, showing robust performance even with complex sentence structures. The paper is available [here](https://www.ai-lc.it/wp-content/uploads/2024/08/IJCOL_10_1_2_hromei_et_al.pdf). For more details, please consult the associated [Github repository](https://github.com/crux82/u-deppllama). This model comes in two sizes: - [13 billion parameters](https://huggingface.co/sag-uniroma2/u-depp-llama-2-13b) - [7 billion parameters](https://huggingface.co/sag-uniroma2/u-depp-llama-2-7b) # How to use it ```Python import transformers import torch from transformers import BitsAndBytesConfig, AutoModelForCausalLM, AutoTokenizer, GenerationConfig from peft import PeftModel quant_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-2-13b-hf", load_in_4bit=True, quantization_config=quant_config, torch_dtype=torch.float16, trust_remote_code=True, device_map={"": 0}, ) model = PeftModel.from_pretrained( model, "sag-uniroma2/u-depp-llama-2-13b" ) generation_config = GenerationConfig( num_beams=4, do_sample=False, early_stopping=True, ) tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf", trust_remote_code=True) input_string = "He was most widely recognized for some of his books." prompt = f""" ### Input: {input_string} ### Answer:""" inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True, max_length=512) input_ids = inputs["input_ids"].to(model.device) with torch.no_grad(): gen_outputs = model.generate( input_ids=input_ids, generation_config=generation_config, return_dict_in_generate=True, output_scores=True, max_new_tokens=1024, use_cache=True, ) s = gen_outputs.sequences[0] output = tokenizer.decode(s, skip_special_tokens=True) response = output.split("### Answer:")[1].rstrip().lstrip() print(response) ``` # Citation ``` @article{hromei2024udeppllama, author = "Hromei, Claudiu Daniel and Croce, Danilo and Basili, Roberto", title = "U-DepPLLaMA: Universal Dependency Parsing via Auto-regressive Large Language Models", journal = "IJCoL", year = 2024, volume = "10", number = "1", pages = "21--38" } ```